[Feat] Support "proxy" arch between coordinator and prover #1701
base: develop
Conversation
Walkthrough

Adds a coordinator proxy subsystem: CLI/server, upstream HTTP clients and managers, per-prover session management with optional persistence, proxy-aware auth/middleware, get_task/submit_proof proxy controllers, DB migrations, libzkp reorganization, mapstructure-based Response.DecodeData, tests, Docker/CI and build assets.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant P as Prover (client)
    participant PX as Proxy Server
    participant UP as Upstream Coordinator
    Note right of PX: Login & session orchestration
    P->>PX: GET /coordinator/v1/challenge
    PX->>P: challenge token
    P->>PX: POST /coordinator/v1/login
    PX->>UP: GET /coordinator/v1/challenge
    UP->>PX: upstream challenge
    PX->>UP: POST /coordinator/v1/login (proxy-signed or forwarded)
    UP->>PX: login token
    PX->>PX: cache/persist session & token
    PX->>P: return JWT
    Note right of PX: Task retrieval flow
    P->>PX: POST /coordinator/v1/get_task (JWT)
    PX->>PX: resolve identity, pick upstream (priority)
    PX->>UP: POST /coordinator/v1/get_task (with token)
    UP->>PX: task
    PX->>PX: set priority for prover
    PX->>P: task (annotated with upstream)
    Note right of PX: Proof submission flow
    P->>PX: POST /coordinator/v1/submit_proof (taskID includes upstream)
    PX->>PX: resolve upstream from taskID
    PX->>UP: POST /coordinator/v1/submit_proof (with token)
    UP->>PX: success
    PX->>PX: clear priority mapping
    PX->>P: success
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Areas needing extra attention:
Possibly related PRs
Suggested reviewers
Poem
Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
4c31d33 to 64ef0f4
Codecov Report

❌ Patch coverage is

Additional details and impacted files

```
@@           Coverage Diff            @@
##           develop    #1701   +/-   ##
===========================================
- Coverage    36.53%   35.30%   -1.23%
===========================================
  Files          247      259      +12
  Lines        21190    22234    +1044
===========================================
+ Hits          7742     7850     +108
- Misses       12618    13556     +938
+ Partials       830      828       -2
```

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Actionable comments posted: 9
🧹 Nitpick comments (7)
coordinator/internal/logic/libzkp/message_types.go (1)
8-13: Consider enhancing the documentation for TaskType constants.

The comment "matching the Rust enum" is vague. Consider adding details about which Rust enum this matches and the purpose of these constants.
Example:
```diff
-// TaskType enum values matching the Rust enum
+// TaskType represents the type of ZK proof task (Chunk, Batch, or Bundle).
+// These values must match the corresponding Rust enum in libzkp.
 const (
-	TaskTypeChunk  = 0
-	TaskTypeBatch  = 1
-	TaskTypeBundle = 2
+	TaskTypeChunk  = 0 // Chunk-level proof
+	TaskTypeBatch  = 1 // Batch-level proof
+	TaskTypeBundle = 2 // Bundle-level proof
 )
```

coordinator/internal/logic/libzkp/lib_mock.go (2)
9-25: Remove commented-out code.

The commented functions appear to be deprecated. If these are no longer needed, remove them to improve code clarity. If they might be needed in the future, rely on version control history instead.
Apply this diff to clean up:
```diff
-// // InitVerifier is a no-op in the mock.
-// func InitVerifier(configJSON string) {}
-
-// // VerifyChunkProof returns a fixed success in the mock.
-// func VerifyChunkProof(proofData, forkName string) bool {
-//	return true
-// }
-
-// // VerifyBatchProof returns a fixed success in the mock.
-// func VerifyBatchProof(proofData, forkName string) bool {
-//	return true
-// }
-
-// // VerifyBundleProof returns a fixed success in the mock.
-// func VerifyBundleProof(proofData, forkName string) bool {
-//	return true
-// }
-
```
27-29: Add documentation explaining why this function panics.

The panic with "should not run here" is unclear. Add a comment explaining that this function is not supported in mock mode and why.
Apply this diff:
```diff
+// UniversalTaskCompatibilityFix is not supported in mock_verifier mode.
+// This function should not be called when using the mock verifier.
 func UniversalTaskCompatibilityFix(taskJSON string) (string, error) {
-	panic("should not run here")
+	panic("UniversalTaskCompatibilityFix is not supported in mock_verifier mode")
 }
```

common/types/response.go (1)
17-28: Consider enabling WeaklyTypedInput for JSON compatibility.

The current decoder configuration may fail when JSON numeric types don't exactly match Go field types (e.g., JSON numbers decoded as `float64` but the target field is `int`). Since `resp.Data` typically holds unmarshaled JSON (`map[string]any`), enabling `WeaklyTypedInput: true` would allow weak type conversions and improve robustness.

Apply this diff to enable weak typing:

```diff
 func (resp *Response) DecodeData(out interface{}) error {
 	// Decode generically unmarshaled JSON (map[string]any, []any) into a typed struct
 	// honoring `json` tags and allowing weak type conversions.
 	dec, err := mapstructure.NewDecoder(&mapstructure.DecoderConfig{
-		TagName: "json",
-		Result:  out,
+		TagName:          "json",
+		WeaklyTypedInput: true,
+		Result:           out,
 	})
 	if err != nil {
 		return err
 	}
 	return dec.Decode(resp.Data)
 }
```

coordinator/conf/config_proxy.json (1)
1-31: Document that this is a template configuration with insecure defaults.

The configuration file contains several settings unsuitable for production:

- Placeholder secrets (lines 5, 8)
- `sslmode=disable` for the database connection (line 27)
- HTTP instead of HTTPS for the coordinator URL (line 19)
Consider adding a comment header or README documentation clarifying that this is a template/example configuration requiring customization for production use.
coordinator/internal/types/response_test.go (1)
11-48: Good test coverage, consider adding edge case tests.

The test properly validates the happy path for DecodeData. Consider adding tests for edge cases such as those sketched after this list:

- Empty or nil `Data` field
- Type mismatches (e.g., JSON numbers vs. Go int types)
- Invalid data structure
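A minimal sketch of those edge-case tests, assuming only the `Response`/`DecodeData` API shown earlier in this review; the `taskLike` struct and its json tags are illustrative stand-ins for `GetTaskSchema`, not the real type:

```go
package types_test

import (
	"testing"

	ctypes "scroll-tech/common/types"
)

// taskLike is a hypothetical target struct used only for this sketch.
type taskLike struct {
	TaskID   string `json:"task_id"`
	TaskType int    `json:"task_type"`
}

func TestDecodeDataNilData(t *testing.T) {
	resp := &ctypes.Response{Data: nil}
	var out taskLike
	// Pin down whether a nil Data should decode to zero values or return an error.
	if err := resp.DecodeData(&out); err != nil {
		t.Logf("nil Data returned error: %v", err)
	}
}

func TestDecodeDataTypeMismatch(t *testing.T) {
	// A string where an int is expected should fail without weak typing enabled.
	resp := &ctypes.Response{Data: map[string]any{"task_id": "t-1", "task_type": "not-a-number"}}
	var out taskLike
	if err := resp.DecodeData(&out); err == nil {
		t.Fatalf("expected decode error for mismatched task_type, got %+v", out)
	}
}
```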
coordinator/internal/config/proxy_config.go (1)
55-74: Consider calling Normalize() automatically in the constructor.

The `Normalize` method is defined but not called within `NewProxyConfig`. This requires callers to remember to normalize the config manually. For convenience and consistency, consider calling `cfg.ProxyManager.Normalize()` before returning.

Apply this diff to auto-normalize:

```diff
 	// Override config with environment variables
 	err = utils.OverrideConfigWithEnv(cfg, "SCROLL_COORDINATOR_PROXY")
 	if err != nil {
 		return nil, err
 	}
 
+	// Apply defaults from nested config
+	if cfg.ProxyManager != nil {
+		cfg.ProxyManager.Normalize()
+	}
+
 	return cfg, nil
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (33)
- common/types/response.go (2 hunks)
- coordinator/Makefile (1 hunks)
- coordinator/cmd/proxy/app/app.go (1 hunks)
- coordinator/cmd/proxy/app/flags.go (1 hunks)
- coordinator/cmd/proxy/main.go (1 hunks)
- coordinator/conf/config_proxy.json (1 hunks)
- coordinator/internal/config/proxy_config.go (1 hunks)
- coordinator/internal/controller/api/auth.go (2 hunks)
- coordinator/internal/controller/proxy/auth.go (1 hunks)
- coordinator/internal/controller/proxy/client.go (1 hunks)
- coordinator/internal/controller/proxy/client_manager.go (1 hunks)
- coordinator/internal/controller/proxy/controller.go (1 hunks)
- coordinator/internal/controller/proxy/get_task.go (1 hunks)
- coordinator/internal/controller/proxy/persistent.go (1 hunks)
- coordinator/internal/controller/proxy/prover_session.go (1 hunks)
- coordinator/internal/controller/proxy/prover_session_test.go (1 hunks)
- coordinator/internal/controller/proxy/submit_proof.go (1 hunks)
- coordinator/internal/logic/auth/login.go (2 hunks)
- coordinator/internal/logic/libzkp/lib.go (1 hunks)
- coordinator/internal/logic/libzkp/lib_mock.go (1 hunks)
- coordinator/internal/logic/libzkp/message_types.go (1 hunks)
- coordinator/internal/logic/libzkp/mock_universal_task.go (2 hunks)
- coordinator/internal/logic/libzkp/universal_task.go (2 hunks)
- coordinator/internal/middleware/challenge_jwt.go (2 hunks)
- coordinator/internal/middleware/login_jwt.go (1 hunks)
- coordinator/internal/route/route.go (2 hunks)
- coordinator/internal/types/prover.go (2 hunks)
- coordinator/internal/types/response_test.go (1 hunks)
- coordinator/test/api_test.go (7 hunks)
- coordinator/test/mock_prover.go (9 hunks)
- coordinator/test/proxy_test.go (1 hunks)
- database/migrate/migrate.go (2 hunks)
- database/migrate/migrations/proxy/0001_running_tables.sql (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (24)
coordinator/internal/middleware/challenge_jwt.go (2)
- coordinator/internal/config/config.go (1): Auth (44-48)
- coordinator/internal/controller/api/controller.go (1): Auth (22-22)

coordinator/internal/logic/libzkp/mock_universal_task.go (1)
- coordinator/internal/logic/libzkp/universal_task.go (1): GenerateUniversalTask (24-26)

coordinator/test/mock_prover.go (2)
- coordinator/internal/types/prover.go (2): ProverType (20-20), MakeProverType (47-56)
- common/types/message/message.go (1): ProofType (14-14)

coordinator/cmd/proxy/app/app.go (8)
- common/utils/flags.go (2): CommonFlags (9-20), ConfigFileFlag (30-34)
- common/utils/logger.go (1): LogSetup (15-42)
- common/utils/simulation.go (2): RegisterSimulation (30-40), CoordinatorAPIApp (24-24)
- coordinator/internal/config/proxy_config.go (3): NewProxyConfig (55-74), ProxyManager (13-19), ProxyConfig (48-52)
- common/database/db.go (2): InitDB (48-80), CloseDB (83-92)
- common/observability/server.go (1): Server (24-53)
- coordinator/internal/controller/proxy/controller.go (1): InitController (24-44)
- coordinator/internal/route/route.go (1): ProxyRoute (45-53)

coordinator/internal/logic/libzkp/lib_mock.go (1)
- coordinator/internal/logic/libzkp/lib.go (4): UniversalTaskCompatibilityFix (127-141), GenerateWrappedProof (76-98), DumpVk (101-117), SetDynamicFeature (120-124)

coordinator/cmd/proxy/main.go (1)
- coordinator/cmd/proxy/app/app.go (1): Run (115-121)

coordinator/internal/controller/proxy/controller.go (6)
- coordinator/internal/controller/proxy/get_task.go (3): GetTaskController (102-109), NewPriorityUpstreamManagerPersistent (56-61), NewGetTaskController (112-119)
- coordinator/internal/controller/proxy/submit_proof.go (2): SubmitProofController (18-22), NewSubmitProofController (25-31)
- coordinator/internal/controller/proxy/auth.go (2): AuthController (18-22), NewAuthController (29-44)
- coordinator/internal/controller/proxy/client_manager.go (2): Client (18-22), NewClientManager (56-70)
- coordinator/internal/config/proxy_config.go (2): ProxyConfig (48-52), ProxyManager (13-19)
- coordinator/internal/controller/proxy/prover_session.go (1): NewProverManagerWithPersistent (34-41)

coordinator/internal/controller/api/auth.go (6)
- coordinator/internal/logic/auth/login.go (3): LoginLogic (22-29), NewLoginLogic (48-50), VerifyMsg (69-77)
- coordinator/internal/controller/proxy/auth.go (2): AuthController (18-22), NewAuthController (29-44)
- coordinator/internal/config/config.go (2): Config (51-56), ProverManager (13-29)
- coordinator/internal/logic/verifier/types.go (1): Verifier (11-17)
- coordinator/internal/types/auth.go (4): ProverProviderTypeKey (22-22), LoginParameter (58-62), Message (42-49), PublicKey (16-16)
- coordinator/internal/types/prover.go (1): ProverProviderTypeProxy (82-82)

coordinator/internal/config/proxy_config.go (2)
- coordinator/internal/config/config.go (1): VerifierConfig (67-71)
- common/utils/utils.go (1): OverrideConfigWithEnv (87-135)

coordinator/internal/middleware/login_jwt.go (4)
- coordinator/internal/controller/proxy/controller.go (1): Auth (16-16)
- coordinator/internal/config/config.go (1): Auth (44-48)
- coordinator/internal/controller/api/controller.go (1): Auth (22-22)
- coordinator/internal/types/auth.go (1): PublicKey (16-16)

coordinator/internal/types/response_test.go (2)
- coordinator/internal/types/get_task.go (1): GetTaskSchema (12-19)
- common/types/response.go (1): Response (11-15)

coordinator/test/proxy_test.go (6)
- coordinator/internal/config/proxy_config.go (2): ProxyClient (32-36), UpStream (39-45)
- common/version/version.go (1): Version (31-31)
- coordinator/internal/controller/proxy/client_manager.go (2): NewClientManager (56-70), Client (18-22)
- common/types/message/message.go (1): ProofTypeChunk (33-33)
- common/types/errno.go (1): ErrCoordinatorEmptyProofData (28-28)
- common/types/db.go (2): ProvingStatus (140-140), ProvingTaskVerified (152-152)

coordinator/internal/logic/libzkp/message_types.go (1)
- common/types/message/message.go (4): ProofType (14-14), ProofTypeChunk (33-33), ProofTypeBatch (35-35), ProofTypeBundle (37-37)

coordinator/internal/route/route.go (6)
- coordinator/internal/middleware/challenge_jwt.go (1): ChallengeMiddleware (17-50)
- coordinator/internal/controller/proxy/controller.go (3): Auth (16-16), GetTask (12-12), SubmitProof (14-14)
- coordinator/internal/config/config.go (1): Auth (44-48)
- coordinator/internal/middleware/login_jwt.go (2): LoginMiddleware (21-46), ProxyLoginMiddleware (49-74)
- common/observability/middleware.go (1): Use (11-18)
- coordinator/internal/config/proxy_config.go (2): ProxyConfig (48-52), ProxyManager (13-19)

coordinator/internal/controller/proxy/auth.go (9)
- coordinator/internal/controller/api/auth.go (3): AuthController (18-20), NewAuthController (29-33), NewAuthControllerWithLogic (22-26)
- coordinator/internal/controller/proxy/controller.go (1): Clients (21-21)
- coordinator/internal/controller/proxy/prover_session.go (1): ProverManager (18-24)
- coordinator/internal/config/proxy_config.go (2): ProxyConfig (48-52), ProxyManager (13-19)
- coordinator/internal/logic/verifier/types.go (1): Verifier (11-17)
- coordinator/internal/logic/auth/login.go (1): NewLoginLogicWithSimpleDeduplicator (43-45)
- coordinator/internal/types/auth.go (7): LoginParameterWithHardForkName (52-55), LoginParameter (58-62), Message (42-49), PublicKey (16-16), ProverName (18-18), ProverVersion (20-20), ProverProviderTypeKey (22-22)
- coordinator/internal/types/prover.go (3): ProverProviderType (59-59), ProverProviderTypeProxy (82-82), ProverType (20-20)
- coordinator/internal/controller/proxy/client_manager.go (1): Client (18-22)

coordinator/internal/controller/proxy/get_task.go (8)
- coordinator/internal/types/auth.go (2): PublicKey (16-16), ProverName (18-18)
- common/types/response.go (2): RenderFailure (50-52), RenderSuccess (45-47)
- common/types/errno.go (3): ErrCoordinatorParameterInvalidNo (22-22), ErrCoordinatorGetTaskFailure (24-24), ErrCoordinatorEmptyProofData (28-28)
- coordinator/internal/controller/proxy/persistent.go (1): NewProverPriorityPersist (87-89)
- coordinator/internal/controller/proxy/prover_session.go (1): ProverManager (18-24)
- coordinator/internal/controller/proxy/controller.go (2): Clients (21-21), GetTask (12-12)
- coordinator/internal/types/get_task.go (2): GetTaskParameter (4-9), GetTaskSchema (12-19)
- coordinator/internal/controller/proxy/client_manager.go (1): Client (18-22)

coordinator/internal/controller/proxy/prover_session_test.go (1)
- coordinator/internal/controller/proxy/prover_session.go (1): NewProverManager (26-32)

coordinator/internal/controller/proxy/submit_proof.go (6)
- coordinator/internal/controller/proxy/prover_session.go (1): ProverManager (18-24)
- coordinator/internal/controller/proxy/controller.go (2): Clients (21-21), SubmitProof (14-14)
- coordinator/internal/controller/proxy/get_task.go (1): PriorityUpstreamManager (42-46)
- coordinator/internal/types/submit_proof.go (1): SubmitProofParameter (4-13)
- common/types/response.go (2): RenderFailure (50-52), RenderSuccess (45-47)
- common/types/errno.go (2): ErrCoordinatorParameterInvalidNo (22-22), ErrCoordinatorGetTaskFailure (24-24)

coordinator/internal/controller/proxy/client.go (4)
- coordinator/internal/controller/proxy/client_manager.go (1): Client (18-22)
- coordinator/internal/config/proxy_config.go (1): UpStream (39-45)
- coordinator/internal/types/auth.go (3): LoginParameter (58-62), PublicKey (16-16), Message (42-49)
- common/types/response.go (1): Response (11-15)

coordinator/internal/controller/proxy/prover_session.go (8)
- coordinator/internal/controller/proxy/persistent.go (1): NewProverDataPersist (17-19)
- coordinator/internal/types/auth.go (5): LoginSchema (28-31), LoginParameter (58-62), Message (42-49), ProverName (18-18), PublicKey (16-16)
- coordinator/internal/controller/proxy/client_manager.go (1): Client (18-22)
- common/types/errno.go (1): ErrJWTTokenExpired (12-12)
- coordinator/internal/types/get_task.go (1): GetTaskParameter (4-9)
- common/types/response.go (1): Response (11-15)
- coordinator/internal/controller/proxy/auth.go (1): LoginParamCache (24-24)
- coordinator/internal/types/submit_proof.go (1): SubmitProofParameter (4-13)

coordinator/internal/logic/auth/login.go (7)
- coordinator/internal/config/config.go (1): VerifierConfig (67-71)
- coordinator/internal/logic/verifier/types.go (1): Verifier (11-17)
- coordinator/internal/orm/challenge.go (1): NewChallenge (25-27)
- coordinator/internal/types/auth.go (4): LoginParameter (58-62), Message (42-49), ProverName (18-18), ProverVersion (20-20)
- common/version/prover_version.go (2): CheckScrollRepoVersion (37-55), CheckScrollProverVersion (11-28)
- common/version/version.go (1): Version (31-31)
- coordinator/internal/types/prover.go (5): ProverProviderType (59-59), ProverProviderTypeInternal (78-78), ProverProviderTypeExternal (80-80), ProverProviderTypeProxy (82-82), ProverProviderTypeUndefined (76-76)

coordinator/internal/controller/proxy/persistent.go (1)
- coordinator/internal/types/auth.go (2): PublicKey (16-16), LoginSchema (28-31)

coordinator/internal/controller/proxy/client_manager.go (3)
- coordinator/internal/config/proxy_config.go (2): ProxyClient (32-36), UpStream (39-45)
- coordinator/internal/types/auth.go (5): LoginParameter (58-62), PublicKey (16-16), Message (42-49), ProverName (18-18), ProverVersion (20-20)
- coordinator/internal/types/prover.go (3): ProverProviderType (59-59), ProverProviderTypeProxy (82-82), ProverType (20-20)

coordinator/test/api_test.go (7)
- coordinator/internal/config/config.go (4): Config (51-56), VerifierConfig (67-71), AssetConfig (59-64), Auth (44-48)
- coordinator/internal/config/proxy_config.go (3): ProxyConfig (48-52), UpStream (39-45), ProxyManager (13-19)
- database/migrate/migrate.go (2): ResetModuleDB (56-61), MigrateModule (30-38)
- coordinator/internal/controller/proxy/client_manager.go (1): Client (18-22)
- coordinator/internal/controller/proxy/controller.go (2): Auth (16-16), InitController (24-44)
- coordinator/internal/controller/api/controller.go (2): Auth (22-22), InitController (26-48)
- coordinator/internal/route/route.go (1): ProxyRoute (45-53)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: tests
🔇 Additional comments (8)
coordinator/internal/logic/libzkp/mock_universal_task.go (1)
18-20: LGTM: Consistent wrapper implementation.

The public wrapper correctly normalizes inputs (task type mapping and fork name lowercasing) before delegating to the internal implementation. This maintains API consistency between mock and production builds.

Note: The potential panic from `fromMessageTaskType` was already flagged in the review of message_types.go.

coordinator/internal/logic/libzkp/universal_task.go (1)
23-26: LGTM: Clean API wrapper with proper input normalization.

The public wrapper correctly maps the task type and normalizes the fork name before delegating to the internal C-binding layer. The implementation is consistent with the mock version, ensuring predictable behavior across build configurations.

Note: The potential panic from `fromMessageTaskType` was already flagged in the review of message_types.go.

coordinator/internal/logic/libzkp/lib.go (1)
1-2: LGTM: Proper build constraint added for code organization.

The build tag correctly separates mock and production implementations. The removed code has been appropriately relocated to `message_types.go` (TaskType constants and mapping function) and `universal_task.go` (GenerateUniversalTask wrapper), improving code organization and maintainability.

coordinator/internal/logic/libzkp/message_types.go (1)
15-26: Original review comment is incorrect and should be dismissed.

The concern about untrusted input reaching `fromMessageTaskType` is not valid. TaskType is always set internally from the `message.ProofType` enum constants (0, 1, or 2), never from external input. The function's panic in the default case is a programming-error check for the internal codebase, not a reachable DoS vector.

Likely an incorrect or invalid review comment.
coordinator/cmd/proxy/app/flags.go (1)
1-30: LGTM!

The CLI flag definitions are well-structured with appropriate defaults. The restrictive `localhost` binding and disabled-by-default HTTP server provide a secure initial configuration.
37-38: LGTM!

The token caching implementation is well-structured and consistent across all methods. The conditional re-authentication logic and defensive assertions ensure correct token management in test scenarios.
Also applies to: 55-61, 128-128, 158-165, 191-198, 269-275
coordinator/test/proxy_test.go (1)
1-284: LGTM!

The test suite provides comprehensive coverage of proxy functionality including handshake, task retrieval, proof submission, and session persistence. The tests properly validate multi-coordinator routing and sticky session behavior.
coordinator/internal/middleware/challenge_jwt.go (1)
17-17: Good refactor to accept only required dependencies.

The signature change from `*config.Config` to `*config.Auth` follows the interface segregation principle. All call sites have been correctly updated to pass the `Auth` field: `conf.Auth` at route.go:29 and `conf.ProxyManager.Auth` at route.go:58.
```go
import (
	"context"
	"errors"
	"fmt"
	"net/http"
	"os"
	"os/signal"
	"time"

	"github.com/gin-gonic/gin"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/scroll-tech/go-ethereum/log"
	"github.com/urfave/cli/v2"
	"gorm.io/gorm"

	"scroll-tech/common/database"
	"scroll-tech/common/observability"
	"scroll-tech/common/utils"
	"scroll-tech/common/version"

	"scroll-tech/coordinator/internal/config"
	"scroll-tech/coordinator/internal/controller/proxy"
	"scroll-tech/coordinator/internal/route"
)

var app *cli.App

func init() {
	// Set up coordinator app info.
	app = cli.NewApp()
	app.Action = action
	app.Name = "coordinator proxy"
	app.Usage = "Proxy for multiple Scroll L2 Coordinators"
	app.Version = version.Version
	app.Flags = append(app.Flags, utils.CommonFlags...)
	app.Flags = append(app.Flags, apiFlags...)
	app.Before = func(ctx *cli.Context) error {
		return utils.LogSetup(ctx)
	}
	// Register `coordinator-test` app for integration-test.
	utils.RegisterSimulation(app, utils.CoordinatorAPIApp)
}

func action(ctx *cli.Context) error {
	cfgFile := ctx.String(utils.ConfigFileFlag.Name)
	cfg, err := config.NewProxyConfig(cfgFile)
	if err != nil {
		log.Crit("failed to load config file", "config file", cfgFile, "error", err)
	}

	var db *gorm.DB
	if dbCfg := cfg.ProxyManager.DB; dbCfg != nil {
		db, err = database.InitDB(cfg.ProxyManager.DB)
		if err != nil {
			log.Crit("failed to init db connection", "err", err)
		}
		defer func() {
			if err = database.CloseDB(db); err != nil {
				log.Error("can not close db connection", "error", err)
			}
		}()
		observability.Server(ctx, db)
	}
	registry := prometheus.DefaultRegisterer

	apiSrv := server(ctx, cfg, db, registry)

	log.Info(
		"Start coordinator api successfully.",
		"version", version.Version,
	)

	// Catch CTRL-C to ensure a graceful shutdown.
	interrupt := make(chan os.Signal, 1)
	signal.Notify(interrupt, os.Interrupt)

	// Wait until the interrupt signal is received from an OS signal.
	<-interrupt
	log.Info("start shutdown coordinator proxy server ...")

	closeCtx, cancelExit := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancelExit()
	if err = apiSrv.Shutdown(closeCtx); err != nil {
		log.Warn("shutdown coordinator proxy server failure", "error", err)
		return nil
	}

	<-closeCtx.Done()
	log.Info("coordinator proxy server exiting success")
	return nil
}

func server(ctx *cli.Context, cfg *config.ProxyConfig, db *gorm.DB, reg prometheus.Registerer) *http.Server {
	router := gin.New()
	proxy.InitController(cfg, db, reg)
	route.ProxyRoute(router, cfg, reg)
	port := ctx.String(httpPortFlag.Name)
	srv := &http.Server{
		Addr:              fmt.Sprintf(":%s", port),
		Handler:           router,
		ReadHeaderTimeout: time.Minute,
	}

	go func() {
		if runServerErr := srv.ListenAndServe(); runServerErr != nil && !errors.Is(runServerErr, http.ErrServerClosed) {
			log.Crit("run coordinator proxy http server failure", "error", runServerErr)
```
Handle SIGTERM for graceful shutdown.
Orchestrators (systemd, Kubernetes, Docker) issue SIGTERM on shutdown. Because the proxy only listens for os.Interrupt, SIGTERM bypasses this handler and the process exits immediately without running Server.Shutdown, dropping in-flight requests. Please register SIGTERM alongside Interrupt so the graceful shutdown path always runs.
```diff
@@
-import (
+import (
 	"context"
 	"errors"
 	"fmt"
 	"net/http"
 	"os"
 	"os/signal"
+	"syscall"
 	"time"
@@
-	signal.Notify(interrupt, os.Interrupt)
+	signal.Notify(interrupt, os.Interrupt, syscall.SIGTERM)
```

🤖 Prompt for AI Agents
In coordinator/cmd/proxy/app/app.go around lines 3 to 108, the shutdown handler
only listens for os.Interrupt so SIGTERM sent by orchestrators is ignored; add
syscall to the imports and register SIGTERM with signal.Notify (e.g.,
signal.Notify(interrupt, os.Interrupt, syscall.SIGTERM)) so the graceful
shutdown path (apiSrv.Shutdown) runs for both SIGINT and SIGTERM.
```go
func (m *ProxyManager) Normalize() {
	if m.Client.Secret == "" {
		m.Client.Secret = m.Auth.Secret
	}

	if m.Client.ProxyVersion == "" {
		m.Client.ProxyVersion = m.Verifier.MinProverVersion
	}
}
```
Add nil checks to prevent potential panic.
The Normalize method accesses nested fields without nil checks. If Client, Auth, or Verifier are nil, this will panic.
Apply this diff to add defensive nil checks:
```diff
 func (m *ProxyManager) Normalize() {
+	if m.Client == nil || m.Auth == nil || m.Verifier == nil {
+		return
+	}
+
 	if m.Client.Secret == "" {
 		m.Client.Secret = m.Auth.Secret
 	}
 
 	if m.Client.ProxyVersion == "" {
 		m.Client.ProxyVersion = m.Verifier.MinProverVersion
 	}
 }
```
🤖 Prompt for AI Agents
In coordinator/internal/config/proxy_config.go around lines 21 to 29, the
Normalize method dereferences nested fields (m.Client, m.Auth, m.Verifier)
without nil checks which can panic; add defensive nil checks at the start
(return early if m == nil), ensure m.Client is non-nil (either return or
instantiate m.Client before assigning Secret/ProxyVersion), and check m.Auth and
m.Verifier are non-nil before reading their Secret or MinProverVersion and only
copy those values when the sources exist.
```go
	var viaProxy bool
	if proverType, proverTypeExist := c.Get(types.ProverProviderTypeKey); proverTypeExist {
		proverType := uint8(proverType.(float64))
		viaProxy = proverType == types.ProverProviderTypeProxy
	}
```
Fix unsafe type assertion in proxy detection.
proxy.AuthController marks proxy-originated logins by storing types.ProverProviderTypeProxy in the context. That value is a typed integer, so asserting it to float64 will panic at runtime. Normalize the interface value with a type switch instead of forcing a float64.
```diff
@@
-	var viaProxy bool
-	if proverType, proverTypeExist := c.Get(types.ProverProviderTypeKey); proverTypeExist {
-		proverType := uint8(proverType.(float64))
-		viaProxy = proverType == types.ProverProviderTypeProxy
-	}
+	var viaProxy bool
+	if raw, exists := c.Get(types.ProverProviderTypeKey); exists {
+		switch v := raw.(type) {
+		case float64:
+			viaProxy = uint8(v) == types.ProverProviderTypeProxy
+		case int:
+			viaProxy = uint8(v) == types.ProverProviderTypeProxy
+		case uint8:
+			viaProxy = v == types.ProverProviderTypeProxy
+		case types.ProverProviderType:
+			viaProxy = v == types.ProverProviderTypeProxy
+		}
+	}
```
🤖 Prompt for AI Agents
In coordinator/internal/controller/api/auth.go around lines 41 to 45, the code
unsafely asserts the context value to float64 which can panic because the stored
prover type is a typed integer; replace the direct float64 assertion with a type
switch over the interface value (handle int, int32, int64, uint8, uint32,
float64, and the expected typed int) to normalize it to a uint8, then set
viaProxy = normalized == types.ProverProviderTypeProxy; ensure you also guard
for nil/missing values and avoid panics by defaulting to false on unexpected
types.
```go
func (p *PriorityUpstreamManager) Delete(key string) {
	p.Lock()
	defer p.Unlock()
	delete(p.data, key)
}
```
Persist priority deletions as well.
Line 96 only evicts the in-memory record, but the row in priority_upstream stays untouched, so the very next Get call reloads the stale upstream from persistence and the proxy never actually drops priority. Please also delete the persistent entry before returning so failover can stick.
Apply this diff:
```diff
 func (p *PriorityUpstreamManager) Delete(key string) {
-	p.Lock()
-	defer p.Unlock()
-	delete(p.data, key)
+	if p.proverPriorityPersist != nil {
+		defer func() {
+			if err := p.proverPriorityPersist.Del(key); err != nil {
+				log.Error("persistent priority record delete failure", "error", err, "key", key)
+			}
+		}()
+	}
+	p.Lock()
+	defer p.Unlock()
+	delete(p.data, key)
 }
```

🤖 Prompt for AI Agents
In coordinator/internal/controller/proxy/get_task.go around lines 95 to 99, the
Delete method currently only evicts the in-memory entry and leaves the row in
priority_upstream, so a subsequent Get reloads the stale record; update Delete
to remove the persistent entry as well: call the persistence/store layer's
delete function for the given key (e.g., p.store.DeletePriorityUpstream or
equivalent) before removing p.data[key], handle and log any error returned (and
avoid silently proceeding if the persistent delete fails), and preserve
locking/consistency (acquire lock around the in-memory mutation but perform the
persistent delete such that the persistent state is removed before the in-memory
map is cleared).
```go
	for _, n := range keys {
		if err, code := getTask(ptc.clients[n]); err == nil && code == 0 {
			// get task done
			return
		}
	}
```
Don't mask upstream failures as empty-task.
Lines 215-219 drop through whenever err != nil, so if every upstream call fails we end up responding with ErrCoordinatorEmptyProofData, hiding the outage from the prover. Capture the last error code/message and surface it via RenderFailure when we never get a successful/empty response.
Apply this diff:
```diff
-	for _, n := range keys {
-		if err, code := getTask(ptc.clients[n]); err == nil && code == 0 {
-			// get task done
-			return
-		}
-	}
-
-	log.Debug("get no task from upstream", "cli", proverName)
-	// if all get task failed, throw empty proof resp
-	types.RenderFailure(ctx, types.ErrCoordinatorEmptyProofData, fmt.Errorf("get empty prover task"))
+	var lastErr error
+	var lastCode int
+	for _, n := range keys {
+		if err, code := getTask(ptc.clients[n]); err == nil && code == 0 {
+			// get task done
+			return
+		} else if err != nil {
+			lastErr = err
+			lastCode = code
+		}
+	}
+
+	if lastErr != nil {
+		types.RenderFailure(ctx, lastCode, lastErr)
+		return
+	}
+
+	log.Debug("get no task from upstream", "cli", proverName)
+	// if all get task failed, throw empty proof resp
+	types.RenderFailure(ctx, types.ErrCoordinatorEmptyProofData, fmt.Errorf("get empty prover task"))
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In coordinator/internal/controller/proxy/get_task.go around lines 214-219, the
loop currently ignores all upstream errors and falls through to respond with
ErrCoordinatorEmptyProofData; instead declare variables lastErr error and
lastCode int before the loop, update them inside the loop whenever a call does
not satisfy err == nil && code == 0 (i.e., set lastErr = err and lastCode = code
for non-success cases), and after the loop check if lastErr != nil or lastCode
!= 0 — if so call RenderFailure with the captured lastErr and lastCode to
surface the upstream failure; keep the existing immediate return when a call
succeeds (err == nil && code == 0).
```go
		r.POST("/proxy_login", loginMiddleware.LoginHandler)
		r.POST("/get_task", api.GetTask.GetTasks)
		r.POST("/submit_proof", api.SubmitProof.SubmitProof)
	}
```
Unreachable /proxy_login endpoint
/proxy_login is registered inside the block guarded by loginMiddleware.MiddlewareFunc(), so callers must already present a valid login token before they can obtain one. That makes the new login flow unusable. Please move the route outside the authenticated group (and wrap it with the challenge middleware, just like /login) so proxy clients can actually log in.
```diff
 func v1(router *gin.RouterGroup, conf *config.Config) {
 	r := router.Group("/v1")
 	challengeMiddleware := middleware.ChallengeMiddleware(conf.Auth)
 	r.GET("/challenge", challengeMiddleware.LoginHandler)
 	loginMiddleware := middleware.LoginMiddleware(conf.Auth)
 	r.POST("/login", challengeMiddleware.MiddlewareFunc(), loginMiddleware.LoginHandler)
+	r.POST("/proxy_login", challengeMiddleware.MiddlewareFunc(), loginMiddleware.LoginHandler)
 	// need jwt token api
 	r.Use(loginMiddleware.MiddlewareFunc())
 	{
-		r.POST("/proxy_login", loginMiddleware.LoginHandler)
 		r.POST("/get_task", api.GetTask.GetTasks)
 		r.POST("/submit_proof", api.SubmitProof.SubmitProof)
 	}
 }
```
🤖 Prompt for AI Agents
In coordinator/internal/route/route.go around lines 38 to 41, the /proxy_login
route is incorrectly registered inside the group protected by
loginMiddleware.MiddlewareFunc(), making it unreachable for clients that need to
obtain a token; move r.POST("/proxy_login", loginMiddleware.LoginHandler) out of
that authenticated group and register it alongside /login, wrapped with the same
challenge middleware used for /login so proxy clients can access the login
endpoint without an existing token.
Actionable comments posted: 6
♻️ Duplicate comments (2)
coordinator/internal/controller/proxy/prover_session.go (1)
289-292: Fix invalid fmt verb (duplicate issue).

Line 291 still uses the invalid fmt verb `%V` instead of `%v`, causing unusable error output. This issue was previously flagged but not yet addressed.

Apply this diff:

```diff
 	newToken, err := c.maintainLogin(ctx, cliMgr, up, loginParam, token.phase)
 	if err != nil {
-		return nil, fmt.Errorf("update prover token fail: %V", err)
+		return nil, fmt.Errorf("update prover token fail: %v", err)
 	}
```

coordinator/internal/controller/proxy/auth.go (1)
66-84: Verify the async fire-and-forget pattern for proxy login is intentional.

The Login method spawns goroutines to call `ProxyLogin` on all clients and returns immediately without waiting for completion. This creates a fire-and-forget pattern where:

- Line 66 uses `context.Background()` instead of `c.Request.Context()`, so proxy logins continue even if the original HTTP request is cancelled.
- Lines 73-77 log errors but don't propagate them to the caller.
- Line 86 returns success even if all proxy logins fail.

This appears intentional for async processing, but please confirm:

- Is it acceptable for the login endpoint to return success while upstream logins might be failing?
- Should the timeout context be derived from the request context for proper cancellation (see the sketch below)?

Additionally, note that the past review comment about gin.Context concurrency appears to be resolved: the goroutines no longer use `c` directly, only extracted data (loginParam) and a separate context (loginCtx).
🧹 Nitpick comments (1)
coordinator/internal/controller/proxy/client_manager.go (1)
76-121: Consider adding retry limits or exponential backoff.

The login retry loop runs indefinitely with a fixed wait duration. While this may be intentional for a persistent background process, consider adding either of the following (a bounded-backoff sketch follows this list):
- A maximum retry count with eventual failure propagation
- Exponential backoff to reduce load during extended outages
- Periodic health checks to detect prolonged failures
This would improve observability and allow callers to detect persistent upstream issues.
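A bounded exponential-backoff sketch of the first two options; the attempt limit, base delay and cap are illustrative values, and `loginOnce` stands in for a single upstream login attempt:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// loginOnce stands in for one upstream login attempt (illustrative only).
func loginOnce(ctx context.Context) error {
	if rand.Intn(3) != 0 {
		return errors.New("upstream unavailable")
	}
	return nil
}

// loginWithBackoff retries with exponential backoff and a hard attempt limit,
// so callers eventually see the failure instead of an endless fixed-wait loop.
func loginWithBackoff(ctx context.Context, maxAttempts int) error {
	wait := time.Second
	const maxWait = 2 * time.Minute
	var lastErr error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if lastErr = loginOnce(ctx); lastErr == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(wait):
		}
		wait *= 2
		if wait > maxWait {
			wait = maxWait
		}
	}
	return fmt.Errorf("login failed after %d attempts: %w", maxAttempts, lastErr)
}

func main() {
	fmt.Println("result:", loginWithBackoff(context.Background(), 5))
}
```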
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- common/go.mod (1 hunks)
- coordinator/internal/controller/proxy/auth.go (1 hunks)
- coordinator/internal/controller/proxy/client_manager.go (1 hunks)
- coordinator/internal/controller/proxy/prover_session.go (1 hunks)
- coordinator/internal/logic/libzkp/message_types.go (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (4)
coordinator/internal/controller/proxy/auth.go (9)
- coordinator/internal/controller/api/auth.go (3): AuthController (18-20), NewAuthController (29-33), NewAuthControllerWithLogic (22-26)
- coordinator/internal/controller/proxy/controller.go (1): Clients (21-21)
- coordinator/internal/controller/proxy/prover_session.go (1): ProverManager (18-24)
- coordinator/internal/config/proxy_config.go (2): ProxyConfig (48-52), ProxyManager (13-19)
- coordinator/internal/logic/verifier/types.go (1): Verifier (11-17)
- coordinator/internal/logic/auth/login.go (1): NewLoginLogicWithSimpleDeduplicator (43-45)
- coordinator/internal/types/auth.go (6): LoginParameterWithHardForkName (52-55), LoginParameter (58-62), Message (42-49), PublicKey (16-16), ProverName (18-18), ProverVersion (20-20)
- coordinator/internal/types/prover.go (3): ProverProviderType (59-59), ProverProviderTypeProxy (82-82), ProverType (20-20)
- coordinator/internal/controller/proxy/client_manager.go (1): Client (18-22)

coordinator/internal/logic/libzkp/message_types.go (1)
- common/types/message/message.go (4): ProofType (14-14), ProofTypeChunk (33-33), ProofTypeBatch (35-35), ProofTypeBundle (37-37)

coordinator/internal/controller/proxy/client_manager.go (3)
- coordinator/internal/config/proxy_config.go (2): ProxyClient (32-36), UpStream (39-45)
- coordinator/internal/types/auth.go (5): LoginParameter (58-62), PublicKey (16-16), Message (42-49), ProverName (18-18), ProverVersion (20-20)
- coordinator/internal/types/prover.go (3): ProverProviderType (59-59), ProverProviderTypeProxy (82-82), ProverType (20-20)

coordinator/internal/controller/proxy/prover_session.go (8)
- coordinator/internal/controller/proxy/persistent.go (1): NewProverDataPersist (17-19)
- coordinator/internal/types/auth.go (5): LoginSchema (28-31), LoginParameter (58-62), Message (42-49), ProverName (18-18), PublicKey (16-16)
- coordinator/internal/controller/proxy/client_manager.go (1): Client (18-22)
- common/types/errno.go (1): ErrJWTTokenExpired (12-12)
- coordinator/internal/types/get_task.go (1): GetTaskParameter (4-9)
- common/types/response.go (1): Response (11-15)
- coordinator/internal/controller/proxy/auth.go (1): LoginParamCache (28-28)
- coordinator/internal/types/submit_proof.go (1): SubmitProofParameter (4-13)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: tests
🔇 Additional comments (10)
coordinator/internal/logic/libzkp/message_types.go (1)
10-13: LGTM!

The task type constants are clearly defined and align with the message.ProofType values.
common/go.mod (1)
15-15: Address low test coverage for the new decoding functionality.

The Codecov report flags `common/types/response.go` with 0.00% patch coverage and 11 missing lines. Since the `DecodeData` method that uses this new mapstructure dependency appears to lack test coverage, consider adding unit tests to verify the decoding behavior, especially edge cases around JSON tag mapping.

coordinator/internal/controller/proxy/client_manager.go (2)
136-184: Client initialization pattern looks correct.

The double-checked locking pattern and context management are appropriate. The use of `context.Background()` (line 164) for the login goroutine is intentional: it allows the login process to continue even if the initial caller cancels, which is correct for a persistent upstream connection.

186-210: The public key format is correct and follows the established pattern.

The codebase already uses `common.Bytes2Hex` without the "0x" prefix for public key generation in test files and mock code, exactly matching the code under review. The existing test suite (auth_test.go, mock_prover.go) validates that this format works correctly with the LoginParameter.SignWithKey() and Verify() methods. The hex format without the prefix is the intended standard for internal coordinator authentication, consistent throughout the codebase.

coordinator/internal/controller/proxy/prover_session.go (3)
110-200: Concurrency and retry logic looks correct.

The phase-based token versioning and completion context pattern properly handle concurrent login attempts. The JWT expiry retry (lines 164-179) aligns with the SDK behavior as noted in the comment.

222-258: GetTask error handling is correct.

The token refresh and error wrapping (line 253) properly uses `%v`, unlike the similar code in SubmitProof.
73-94: Concern partially addressed; consider explicit cleanup of deprecated data.

The `willDeprecatedData` mechanism safely promotes old sessions back to active storage when accessed via `Get()` (lines 60-62), and concurrent evictions are thread-safe due to proper locking. However, `willDeprecatedData` is only overwritten on the next cache rollover (line 88), meaning that if the size limit isn't reached again, deprecated data persists indefinitely. To prevent potential memory accumulation in low-traffic scenarios, consider explicitly clearing `willDeprecatedData` when it's no longer needed, such as after a timeout or when promoting all remaining entries back to `m.data`.

coordinator/internal/controller/proxy/auth.go (3)
1-30: LGTM!

Imports and constants are well-organized and appropriate for the proxy authentication functionality.

59-61: LGTM on recursive proxy login guard.

This check prevents infinite recursion when a proxy tries to log in to another proxy.

35-39: No issues found; the dummy Verifier is correctly scoped and safe.

The login logic only accesses `OpenVMVkMap`, which is initialized in the dummy verifier. The uninitialized maps (`ChunkVk`, `BatchVk`, `BundleVk`) are never accessed in the login flow. The dummy verifier is locally scoped to `NewAuthController` and isolated from `proof_receiver`, which uses a separate verifier instance. No runtime panic risk exists.
```go
	github.com/gin-gonic/gin v1.9.1
	github.com/mattn/go-colorable v0.1.13
	github.com/mattn/go-isatty v0.0.20
	github.com/mitchellh/mapstructure v1.5.0
```
🧩 Analysis chain
Verify mapstructure v1.5.0 is current and has no security advisories.
The dependency is being added as a direct dependency (moved from indirect) to support the new DecodeData method in common/types.Response. However, since my knowledge is current only through March 2025 and this PR is from July 2025, I cannot verify whether v1.5.0 is the latest/recommended version or if security advisories exist.
🌐 Web query:
What is the latest version of mapstructure package and are there any security advisories for v1.5.0?
💡 Result:
Latest release: go-viper/mapstructure v2.4.0 (release 15 Jul 2025). [1]
Security advisories: yes — multiple advisories (e.g., CVE-2025-52893 / GHSA-fv92-fjc5-jj9h) list mapstructure v1.5.0 as affected (sensitive data insertion into logs); upgrade to a fixed version (v2.3.0+ or v2.4.0) is recommended. [2][3]
Sources:
- GitHub releases (go-viper/mapstructure) — v2.4.0. [1]
- OSV / CVE record showing affected versions including v1.5.0 (CVE-2025-52893). [2]
- Snyk advisory (CVE-2025-52893 / GHSA-fv92-fjc5-jj9h) recommending upgrade to v2.3.0+. [3]
Upgrade mapstructure to v2.4.0 to resolve security vulnerability CVE-2025-52893.
Version 1.5.0 is affected by security advisory CVE-2025-52893 / GHSA-fv92-fjc5-jj9h (sensitive data insertion into logs); upgrade to v2.3.0+ or v2.4.0 is required. The latest version is v2.4.0 (released July 15, 2025).
🤖 Prompt for AI Agents
In common/go.mod at line 15 update the mapstructure dependency to the v2 module
and version and fix imports: replace the existing
github.com/mitchellh/mapstructure v1.5.0 entry by requiring
github.com/mitchellh/mapstructure/v2 v2.4.0, then run `go get
github.com/mitchellh/mapstructure/v2@v2.4.0` and `go mod tidy` to update go.sum;
also update any source files that import "github.com/mitchellh/mapstructure" to
import "github.com/mitchellh/mapstructure/v2" and run the project's
unit/integration tests to verify nothing broke.
```go
func (a *AuthController) IdentityHandler(c *gin.Context) interface{} {
	claims := jwt.ExtractClaims(c)
	loginParam := &types.LoginParameter{}

	if proverName, ok := claims[types.ProverName]; ok {
		loginParam.Message.ProverName, _ = proverName.(string)
	}

	if proverVersion, ok := claims[types.ProverVersion]; ok {
		loginParam.Message.ProverVersion, _ = proverVersion.(string)
	}

	if providerType, ok := claims[types.ProverProviderTypeKey]; ok {
		num, _ := providerType.(float64)
		loginParam.Message.ProverProviderType = types.ProverProviderType(num)
	}

	if signature, ok := claims[SignatureKey]; ok {
		loginParam.Signature, _ = signature.(string)
	}

	if proverTypes, ok := claims[ProverTypesKey]; ok {
		arr, _ := proverTypes.([]any)
		for _, elm := range arr {
			num, _ := elm.(float64)
			loginParam.Message.ProverTypes = append(loginParam.Message.ProverTypes, types.ProverType(num))
		}
	}

	if publicKey, ok := claims[types.PublicKey]; ok {
		loginParam.PublicKey, _ = publicKey.(string)
	}

	if loginParam.PublicKey != "" {
		c.Set(LoginParamCache, loginParam)
		c.Set(types.ProverName, loginParam.Message.ProverName)
		// publickey will also be set since we have specified public_key as identical key
		return loginParam.PublicKey
	}

	return nil
}
```
🛠️ Refactor suggestion | 🟠 Major
Silent type assertion failures may create partially initialized LoginParameter.
The IdentityHandler extracts claims from JWT but silently ignores type assertion failures (lines 111-138). This can result in a partially populated LoginParameter being stored in the context, potentially causing subtle bugs downstream.
Consider:
- Logging warnings when type assertions fail.
- Validating that critical fields (PublicKey, ProverName) are present before proceeding.
- Validating numeric ranges for uint8 conversions (lines 121, 132) to prevent overflow when float64 values exceed 255.
Example improvement for critical fields:
```diff
 	if publicKey, ok := claims[types.PublicKey]; ok {
 		loginParam.PublicKey, _ = publicKey.(string)
 	}
+
+	// Validate critical fields before storing in context
+	if loginParam.PublicKey == "" {
+		log.Warn("IdentityHandler: missing or invalid public_key in JWT claims")
+		return nil
+	}
```

For numeric conversions:

```diff
 	if providerType, ok := claims[types.ProverProviderTypeKey]; ok {
 		num, _ := providerType.(float64)
+		if num < 0 || num > 255 {
+			log.Warn("IdentityHandler: prover_provider_type out of range", "value", num)
+		}
 		loginParam.Message.ProverProviderType = types.ProverProviderType(num)
 	}
```
🤖 Prompt for AI Agents
In coordinator/internal/controller/proxy/auth.go around lines 107-149, the
IdentityHandler silently ignores type assertion failures causing a
partially-initialized LoginParameter to be stored; update the handler to check
each type assertion result and log a warning when an assertion fails, validate
that required fields (at minimum PublicKey and Message.ProverName) are non-empty
before calling c.Set and returning, and for numeric claims (provider type and
prover types) verify the float64 is within the valid uint8/enum range (0..255 or
defined enum bounds) before casting — if any critical validation fails, do not
set the context or return the public key (return nil) to avoid downstream
surprises. Ensure logging includes which claim failed and the raw value to aid
debugging.
func buildPrivateKey(inputBytes []byte) (*ecdsa.PrivateKey, error) {
    // Try appending bytes from 0x0 to 0x20 until we get a valid private key
    for appendByte := byte(0x0); appendByte <= 0x20; appendByte++ {
        // Append the byte to input
        extendedBytes := append(inputBytes, appendByte)

        // Calculate 256-bit hash
        hash := crypto.Keccak256(extendedBytes)

        // Try to create private key from hash
        if k, err := crypto.ToECDSA(hash); err == nil {
            return k, nil
        }
    }

    return nil, fmt.Errorf("failed to generate valid private key from input bytes")
}
Non-standard private key derivation approach.
The iterative append-and-hash approach to generate a valid ECDSA key is unusual and non-standard. While it ensures deterministic key generation from the secret, this pattern is not a recognized cryptographic key derivation function (KDF). Standard approaches like HKDF or PBKDF2 are more appropriate and well-vetted for deriving keys from secrets.
Consider using a standard KDF:
-func buildPrivateKey(inputBytes []byte) (*ecdsa.PrivateKey, error) {
- // Try appending bytes from 0x0 to 0x20 until we get a valid private key
- for appendByte := byte(0x0); appendByte <= 0x20; appendByte++ {
- // Append the byte to input
- extendedBytes := append(inputBytes, appendByte)
-
- // Calculate 256-bit hash
- hash := crypto.Keccak256(extendedBytes)
-
- // Try to create private key from hash
- if k, err := crypto.ToECDSA(hash); err == nil {
- return k, nil
- }
- }
-
- return nil, fmt.Errorf("failed to generate valid private key from input bytes")
+func buildPrivateKey(inputBytes []byte) (*ecdsa.PrivateKey, error) {
+ // Use a single hash of the input as the key material
+ // This is deterministic and simpler than iterating
+ hash := crypto.Keccak256(inputBytes)
+ return crypto.ToECDSA(hash)
}Note: If the secret is already a valid 32-byte key material, the single-hash approach should work. If crypto.ToECDSA still fails, the secret itself may need to be regenerated rather than working around invalid key material.
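If the team would rather move to a vetted KDF than rely on a bare Keccak hash, a minimal sketch could look like the following. It assumes golang.org/x/crypto/hkdf is an acceptable dependency and that the info label is an arbitrary domain-separation string; this illustrates the HKDF option mentioned above and is not the PR's implementation.

```go
package keyderive // hypothetical package, for illustration only

import (
	"crypto/ecdsa"
	"crypto/sha256"
	"fmt"
	"io"

	"github.com/ethereum/go-ethereum/crypto"
	"golang.org/x/crypto/hkdf"
)

// buildPrivateKeyHKDF expands the configured secret into 32 bytes of key
// material via HKDF-SHA256 and interprets it as a secp256k1 private key.
func buildPrivateKeyHKDF(secret []byte) (*ecdsa.PrivateKey, error) {
	kdf := hkdf.New(sha256.New, secret, nil, []byte("coordinator-proxy-login-key"))
	keyMaterial := make([]byte, 32)
	if _, err := io.ReadFull(kdf, keyMaterial); err != nil {
		return nil, fmt.Errorf("hkdf expand failed: %w", err)
	}
	// ToECDSA still rejects out-of-range scalars, so the error path remains,
	// but hitting it with HKDF output is astronomically unlikely.
	return crypto.ToECDSA(keyMaterial)
}
```

Either variant keeps the derivation deterministic for a fixed secret, which is the property the proxy login appears to depend on.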
func fromMessageTaskType(taskType int) int {
    switch message.ProofType(taskType) {
    case message.ProofTypeChunk:
        return TaskTypeChunk
    case message.ProofTypeBatch:
        return TaskTypeBatch
    case message.ProofTypeBundle:
        return TaskTypeBundle
    default:
        panic(fmt.Sprintf("unsupported proof type: %d", taskType))
    }
}
Replace panic with error return and validate input range.
The function has two critical issues:
- Panic on unsupported type: The panic on line 25 will crash the entire coordinator/proxy application if an unexpected proof type is received. In a production proxy/coordinator architecture, this could cause cascading failures.
- Unsafe type conversion: Casting int to uint8 (message.ProofType) without validation is unsafe. Negative values or values >255 will be truncated/wrapped, potentially causing the switch to match incorrect cases or trigger the panic with a misleading error message.
Apply this diff to return an error instead of panicking and validate the input range:
-func fromMessageTaskType(taskType int) int {
+func fromMessageTaskType(taskType int) (int, error) {
+ if taskType < 0 || taskType > 255 {
+ return 0, fmt.Errorf("task type out of valid range: %d", taskType)
+ }
switch message.ProofType(taskType) {
case message.ProofTypeChunk:
- return TaskTypeChunk
+ return TaskTypeChunk, nil
case message.ProofTypeBatch:
- return TaskTypeBatch
+ return TaskTypeBatch, nil
case message.ProofTypeBundle:
- return TaskTypeBundle
+ return TaskTypeBundle, nil
default:
- panic(fmt.Sprintf("unsupported proof type: %d", taskType))
+ return 0, fmt.Errorf("unsupported proof type: %d", taskType)
}
}Note: This change will require updating all callers of fromMessageTaskType to handle the returned error.
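For illustration, a call site under the error-returning signature could look roughly like this; the wrapper name is hypothetical and assumes it sits in the same libzkp package as fromMessageTaskType:

```go
// toLibzkpTaskType shows how a caller would propagate the error instead of
// relying on the old panic path.
func toLibzkpTaskType(t message.ProofType) (int, error) {
	taskType, err := fromMessageTaskType(int(t))
	if err != nil {
		return 0, fmt.Errorf("cannot map proof type %d: %w", t, err)
	}
	return taskType, nil
}
```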
🤖 Prompt for AI Agents
In coordinator/internal/logic/libzkp/message_types.go around lines 16 to 27,
replace the panic-based conversion with a safe, validated error-returning
implementation: change the function signature to return (int, error), first
validate that the incoming int is within the uint8 range (0..255) and return a
descriptive error if out of range, cast to message.ProofType only after
validation, map the supported cases to TaskTypeChunk/Batch/Bundle and return the
mapped int with nil error, and for any unsupported proof type return a clear
error instead of panicking; update all callers to handle the (int, error) return
and propagate or handle errors appropriately.
Actionable comments posted: 2
♻️ Duplicate comments (1)
coordinator/cmd/proxy/app/app.go (1)
76-83: Also handle SIGTERM for graceful shutdown.
SIGTERM is the default shutdown signal from systemd/Kubernetes/Docker. Right now only os.Interrupt is caught, so SIGTERM will bypass the shutdown path and drop in-flight requests—this was already flagged earlier. Please import syscall and register syscall.SIGTERM with signal.Notify.
+	"syscall"
@@
-	signal.Notify(interrupt, os.Interrupt)
+	signal.Notify(interrupt, os.Interrupt, syscall.SIGTERM)
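For reference, a minimal sketch of the full shutdown wiring with both signals registered; the function name is a placeholder, not code from this PR.

```go
package main

import (
	"os"
	"os/signal"
	"syscall"
)

// waitForShutdown blocks until Ctrl-C or the SIGTERM sent by systemd, Kubernetes,
// or Docker on stop, so the caller can drain in-flight requests before exiting.
func waitForShutdown() {
	interrupt := make(chan os.Signal, 1)
	signal.Notify(interrupt, os.Interrupt, syscall.SIGTERM)
	<-interrupt
	// graceful shutdown of the proxy server and DB connections goes here
}
```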
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
.github/workflows/docker.yml(1 hunks)build/dockerfiles/coordinator-proxy.Dockerfile(1 hunks)build/dockerfiles/coordinator-proxy.Dockerfile.dockerignore(1 hunks)coordinator/cmd/proxy/app/app.go(1 hunks)coordinator/conf/config_proxy.json(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- coordinator/conf/config_proxy.json
🧰 Additional context used
🧬 Code graph analysis (1)
coordinator/cmd/proxy/app/app.go (8)
common/utils/flags.go (2)
CommonFlags(9-20)ConfigFileFlag(30-34)common/utils/logger.go (1)
LogSetup(15-42)common/utils/simulation.go (2)
RegisterSimulation(30-40)CoordinatorAPIApp(24-24)coordinator/internal/config/proxy_config.go (3)
NewProxyConfig(55-74)ProxyManager(13-19)ProxyConfig(48-52)common/database/db.go (2)
InitDB(48-80)CloseDB(83-92)common/observability/server.go (1)
Server(24-53)coordinator/internal/controller/proxy/controller.go (1)
InitController(24-44)coordinator/internal/route/route.go (1)
ProxyRoute(45-53)
🪛 actionlint (1.7.8)
.github/workflows/docker.yml
370-370: the runner of "docker/setup-qemu-action@v2" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
372-372: the runner of "docker/setup-buildx-action@v2" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
374-374: the runner of "docker/login-action@v2" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
393-393: the runner of "docker/build-push-action@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: check
- GitHub Check: tests
- name: Set up QEMU
  uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v2
- name: Login to Docker Hub
  uses: docker/login-action@v2
  with:
    username: ${{ secrets.DOCKERHUB_USERNAME }}
    password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v4
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: ${{ env.AWS_REGION }}
- name: Login to Amazon ECR
  id: login-ecr
  uses: aws-actions/amazon-ecr-login@v2
- name: check repo and create it if not exist
  env:
    REPOSITORY: coordinator-proxy
  run: |
    aws --region ${{ env.AWS_REGION }} ecr describe-repositories --repository-names ${{ env.REPOSITORY }} && : || aws --region ${{ env.AWS_REGION }} ecr create-repository --repository-name ${{ env.REPOSITORY }}
- name: Build and push
  uses: docker/build-push-action@v3
  env:
    ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
    REPOSITORY: coordinator-proxy
    IMAGE_TAG: ${{ github.ref_name }}
  with:
    context: .
    file: ./build/dockerfiles/coordinator-proxy.Dockerfile
    push: true
    tags: |
      scrolltech/${{ env.REPOSITORY }}:${{ env.IMAGE_TAG }}
      ${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:${{ env.IMAGE_TAG }}
Upgrade Docker GitHub actions to Node20-compatible releases.
docker/setup-qemu-action@v2, docker/setup-buildx-action@v2, docker/login-action@v2, and docker/build-push-action@v3 all run on the removed Node16 runtime, so this job will fail on GitHub-hosted runners after Node16 was retired on November 12, 2024. Bump them to the Node20-ready releases (@v3 for the first three and @v5 for build-push) before merging. (github.blog)
- - name: Set up QEMU
- uses: docker/setup-qemu-action@v2
+ - name: Set up QEMU
+ uses: docker/setup-qemu-action@v3
@@
- - name: Set up Docker Buildx
- uses: docker/setup-buildx-action@v2
+ - name: Set up Docker Buildx
+ uses: docker/setup-buildx-action@v3
@@
- - name: Login to Docker Hub
- uses: docker/login-action@v2
+ - name: Login to Docker Hub
+ uses: docker/login-action@v3
@@
- - name: Build and push
- uses: docker/build-push-action@v3
+ - name: Build and push
+ uses: docker/build-push-action@v5📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| - name: Set up QEMU | |
| uses: docker/setup-qemu-action@v2 | |
| - name: Set up Docker Buildx | |
| uses: docker/setup-buildx-action@v2 | |
| - name: Login to Docker Hub | |
| uses: docker/login-action@v2 | |
| with: | |
| username: ${{ secrets.DOCKERHUB_USERNAME }} | |
| password: ${{ secrets.DOCKERHUB_TOKEN }} | |
| - name: Configure AWS credentials | |
| uses: aws-actions/configure-aws-credentials@v4 | |
| with: | |
| aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }} | |
| aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }} | |
| aws-region: ${{ env.AWS_REGION }} | |
| - name: Login to Amazon ECR | |
| id: login-ecr | |
| uses: aws-actions/amazon-ecr-login@v2 | |
| - name: check repo and create it if not exist | |
| env: | |
| REPOSITORY: coordinator-proxy | |
| run: | | |
| aws --region ${{ env.AWS_REGION }} ecr describe-repositories --repository-names ${{ env.REPOSITORY }} && : || aws --region ${{ env.AWS_REGION }} ecr create-repository --repository-name ${{ env.REPOSITORY }} | |
| - name: Build and push | |
| uses: docker/build-push-action@v3 | |
| env: | |
| ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }} | |
| REPOSITORY: coordinator-proxy | |
| IMAGE_TAG: ${{ github.ref_name }} | |
| with: | |
| context: . | |
| file: ./build/dockerfiles/coordinator-proxy.Dockerfile | |
| push: true | |
| tags: | | |
| scrolltech/${{ env.REPOSITORY }}:${{ env.IMAGE_TAG }} | |
| ${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:${{ env.IMAGE_TAG }} | |
| - name: Set up QEMU | |
| uses: docker/setup-qemu-action@v3 | |
| - name: Set up Docker Buildx | |
| uses: docker/setup-buildx-action@v3 | |
| - name: Login to Docker Hub | |
| uses: docker/login-action@v3 | |
| with: | |
| username: ${{ secrets.DOCKERHUB_USERNAME }} | |
| password: ${{ secrets.DOCKERHUB_TOKEN }} | |
| - name: Configure AWS credentials | |
| uses: aws-actions/configure-aws-credentials@v4 | |
| with: | |
| aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }} | |
| aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }} | |
| aws-region: ${{ env.AWS_REGION }} | |
| - name: Login to Amazon ECR | |
| id: login-ecr | |
| uses: aws-actions/amazon-ecr-login@v2 | |
| - name: check repo and create it if not exist | |
| env: | |
| REPOSITORY: coordinator-proxy | |
| run: | | |
| aws --region ${{ env.AWS_REGION }} ecr describe-repositories --repository-names ${{ env.REPOSITORY }} && : || aws --region ${{ env.AWS_REGION }} ecr create-repository --repository-name ${{ env.REPOSITORY }} | |
| - name: Build and push | |
| uses: docker/build-push-action@v5 | |
| env: | |
| ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }} | |
| REPOSITORY: coordinator-proxy | |
| IMAGE_TAG: ${{ github.ref_name }} | |
| with: | |
| context: . | |
| file: ./build/dockerfiles/coordinator-proxy.Dockerfile | |
| push: true | |
| tags: | | |
| scrolltech/${{ env.REPOSITORY }}:${{ env.IMAGE_TAG }} | |
| ${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:${{ env.IMAGE_TAG }} |
🧰 Tools
🪛 actionlint (1.7.8)
370-370: the runner of "docker/setup-qemu-action@v2" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
372-372: the runner of "docker/setup-buildx-action@v2" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
374-374: the runner of "docker/login-action@v2" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
393-393: the runner of "docker/build-push-action@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
🤖 Prompt for AI Agents
.github/workflows/docker.yml around lines 369 to 404: the workflow uses Docker
GitHub Actions pinned to Node16-era releases which will fail on GitHub-hosted
runners; update the action versions to Node20-compatible releases by changing
docker/setup-qemu-action@v2 -> @v3, docker/setup-buildx-action@v2 -> @v3,
docker/login-action@v2 -> @v3, and docker/build-push-action@v3 -> @v5 so the job
runs on the supported Node20 runtime.
    var db *gorm.DB
    if dbCfg := cfg.ProxyManager.DB; dbCfg != nil {
        log.Info("Apply persistent storage", "via", cfg.ProxyManager.DB.DSN)
Stop logging the raw DB DSN.
Including cfg.ProxyManager.DB.DSN in INFO logs will leak database credentials or connection tokens into stdout/log aggregation; that’s a compliance/security violation. Please drop the DSN from the log entry (or explicitly redact it) before merging.
- log.Info("Apply persistent storage", "via", cfg.ProxyManager.DB.DSN)
+ log.Info("Apply persistent storage")📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| log.Info("Apply persistent storage", "via", cfg.ProxyManager.DB.DSN) | |
| log.Info("Apply persistent storage") |
🤖 Prompt for AI Agents
In coordinator/cmd/proxy/app/app.go around line 55, the log entry currently
prints cfg.ProxyManager.DB.DSN which leaks credentials; remove the raw DSN from
the log (or replace it with a redacted value such as "<redacted>" or just omit
the field) and instead log a non-sensitive message like "Apply persistent
storage" or include only a safe indicator (e.g., db configured=true). Ensure no
plaintext DSN is written to stdout/log aggregation.
Actionable comments posted: 3
♻️ Duplicate comments (1)
coordinator/internal/controller/proxy/client_manager.go (1)
40-56: Non-standard private key derivation approach. This iterative append-and-hash approach was already flagged in previous reviews. Standard KDFs like HKDF or PBKDF2 should be used instead.
🧹 Nitpick comments (1)
coordinator/internal/controller/proxy/client_manager.go (1)
138-186: Consider simplifying the double-checked locking pattern. The current implementation correctly handles concurrent access, but the combination of double-checked locking, completion context tracking, and goroutine coordination adds significant complexity. Consider whether sync.Once or a simpler state machine might reduce cognitive load (a rough sketch follows below).
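A sketch of the sync.Once shape hinted at here, with invented type and field names; it only demonstrates the concurrency pattern and deliberately ignores the Reset path the real manager needs.

```go
package proxysketch

import (
	"context"
	"sync"
)

type upstreamClient struct{ token string }

type lazyClient struct {
	once   sync.Once
	cached *upstreamClient
	login  func(ctx context.Context) *upstreamClient // stands in for doLogin
}

// get runs the login exactly once; concurrent callers block inside Do until the
// first login finishes, then everyone reuses the cached client.
func (l *lazyClient) get(ctx context.Context) *upstreamClient {
	l.once.Do(func() {
		l.cached = l.login(ctx)
	})
	return l.cached
}
```

Because sync.Once cannot be re-armed, the existing mutex-guarded design may remain the better fit if Reset on auth failure has to stay; the trade-off is worth a comment in the code either way.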
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
coordinator/internal/controller/proxy/client_manager.go(1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
coordinator/internal/controller/proxy/client_manager.go (4)
coordinator/internal/config/proxy_config.go (2)
ProxyClient(32-36)UpStream(39-45)coordinator/internal/types/auth.go (5)
LoginParameter(58-62)PublicKey(16-16)Message(42-49)ProverName(18-18)ProverVersion(20-20)common/version/version.go (1)
Version(31-31)coordinator/internal/types/prover.go (3)
ProverProviderType(59-59)ProverProviderTypeProxy(82-82)ProverType(20-20)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: check
🔇 Additional comments (4)
coordinator/internal/controller/proxy/client_manager.go (4)
58-72: LGTM on constructor logic. The initialization and configuration handling is correct. The security concern with key derivation is covered separately.
125-132: LGTM! The Reset implementation correctly uses locking and pointer comparison to safely clear the cached client.
134-136: LGTM! Simple getter method is correctly implemented.
188-212: LGTM! The login parameter generation is correctly implemented with appropriate metadata, public key compression, and signature generation.
    for {
        log.Info("proxy attempting login to upstream coordinator", "name", cliMgr.name)
        loginResp, err := loginCli.Login(ctx, cliMgr.genLoginParam)
        if err == nil && loginResp.ErrCode == 0 {
            var loginResult loginSchema
            err = loginResp.DecodeData(&loginResult)
            if err != nil {
                log.Error("login parsing data fail", "error", err)
            } else {
                loginCli.loginToken = loginResult.Token
                log.Info("login to upstream coordinator successful", "name", cliMgr.name, "time", loginResult.Time)
                // TODO: we need to parse time if we start making use of it
                return
            }
        } else if err != nil {
            log.Error("login process fail", "error", err)
        } else {
            log.Error("login get fail resp", "code", loginResp.ErrCode, "msg", loginResp.ErrMsg)
        }

        log.Info("login to upstream coordinator failed, retrying", "name", cliMgr.name, "error", err, "waitDuration", waitDuration)

        timer := time.NewTimer(waitDuration)
        select {
        case <-ctx.Done():
            timer.Stop()
            return
        case <-timer.C:
            // Continue to next retry
        }
    }
Retry loop ignores configured RetryCount.
The method retries indefinitely without checking cfg.RetryCount, which exists in the UpStream config. This could lead to excessive retry attempts when the upstream is persistently unavailable.
Consider adding retry count enforcement:
func (cliMgr *ClientManager) doLogin(ctx context.Context, loginCli *upClient) {
if cliMgr.cfg.CompatibileMode {
loginCli.loginToken = "dummy"
log.Info("Skip login process for compatible mode")
return
}
// Calculate wait time between 2 seconds and cfg.RetryWaitTime
minWait := 2 * time.Second
waitDuration := time.Duration(cliMgr.cfg.RetryWaitTime) * time.Second
if waitDuration < minWait {
waitDuration = minWait
}
+ retryCount := uint(0)
for {
+ if cliMgr.cfg.RetryCount > 0 && retryCount >= cliMgr.cfg.RetryCount {
+ log.Error("login retry limit exceeded", "name", cliMgr.name, "retries", retryCount)
+ return
+ }
+ retryCount++
+
log.Info("proxy attempting login to upstream coordinator", "name", cliMgr.name)
loginResp, err := loginCli.Login(ctx, cliMgr.genLoginParam)
if err == nil && loginResp.ErrCode == 0 {📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| for { | |
| log.Info("proxy attempting login to upstream coordinator", "name", cliMgr.name) | |
| loginResp, err := loginCli.Login(ctx, cliMgr.genLoginParam) | |
| if err == nil && loginResp.ErrCode == 0 { | |
| var loginResult loginSchema | |
| err = loginResp.DecodeData(&loginResult) | |
| if err != nil { | |
| log.Error("login parsing data fail", "error", err) | |
| } else { | |
| loginCli.loginToken = loginResult.Token | |
| log.Info("login to upstream coordinator successful", "name", cliMgr.name, "time", loginResult.Time) | |
| // TODO: we need to parse time if we start making use of it | |
| return | |
| } | |
| } else if err != nil { | |
| log.Error("login process fail", "error", err) | |
| } else { | |
| log.Error("login get fail resp", "code", loginResp.ErrCode, "msg", loginResp.ErrMsg) | |
| } | |
| log.Info("login to upstream coordinator failed, retrying", "name", cliMgr.name, "error", err, "waitDuration", waitDuration) | |
| timer := time.NewTimer(waitDuration) | |
| select { | |
| case <-ctx.Done(): | |
| timer.Stop() | |
| return | |
| case <-timer.C: | |
| // Continue to next retry | |
| } | |
| } | |
| retryCount := uint(0) | |
| for { | |
| if cliMgr.cfg.RetryCount > 0 && retryCount >= cliMgr.cfg.RetryCount { | |
| log.Error("login retry limit exceeded", "name", cliMgr.name, "retries", retryCount) | |
| return | |
| } | |
| retryCount++ | |
| log.Info("proxy attempting login to upstream coordinator", "name", cliMgr.name) | |
| loginResp, err := loginCli.Login(ctx, cliMgr.genLoginParam) | |
| if err == nil && loginResp.ErrCode == 0 { | |
| var loginResult loginSchema | |
| err = loginResp.DecodeData(&loginResult) | |
| if err != nil { | |
| log.Error("login parsing data fail", "error", err) | |
| } else { | |
| loginCli.loginToken = loginResult.Token | |
| log.Info("login to upstream coordinator successful", "name", cliMgr.name, "time", loginResult.Time) | |
| // TODO: we need to parse time if we start making use of it | |
| return | |
| } | |
| } else if err != nil { | |
| log.Error("login process fail", "error", err) | |
| } else { | |
| log.Error("login get fail resp", "code", loginResp.ErrCode, "msg", loginResp.ErrMsg) | |
| } | |
| log.Info("login to upstream coordinator failed, retrying", "name", cliMgr.name, "error", err, "waitDuration", waitDuration) | |
| timer := time.NewTimer(waitDuration) | |
| select { | |
| case <-ctx.Done(): | |
| timer.Stop() | |
| return | |
| case <-timer.C: | |
| // Continue to next retry | |
| } | |
| } |
🤖 Prompt for AI Agents
In coordinator/internal/controller/proxy/client_manager.go around lines 92 to
122, the login retry loop currently retries forever and ignores the UpStream
cfg.RetryCount; fix this by adding a retry counter (or loop over attempts) that
increments on each failed attempt and checks against cfg.RetryCount before
sleeping/retrying, log a clear error when retry limit is reached and stop
retrying (return or cancel), also ensure ctx.Done() is still respected and any
timer is stopped when exiting so we don't leak timers.
        completionCtx = cliMgr.cachedCli.completionCtx
    } else {
        // Set new completion context and launch login goroutine
        ctx, completionDone := context.WithCancel(context.TODO())
Replace context.TODO() with context.Background().
context.TODO() signals that the appropriate context is unclear or not yet determined. Since this is for a background login process, context.Background() is more appropriate.
Apply this diff:
-	ctx, completionDone := context.WithCancel(context.TODO())
+	ctx, completionDone := context.WithCancel(context.Background())
+ ctx, completionDone := context.WithCancel(context.Background())📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| ctx, completionDone := context.WithCancel(context.TODO()) | |
| ctx, completionDone := context.WithCancel(context.Background()) |
🤖 Prompt for AI Agents
In coordinator/internal/controller/proxy/client_manager.go around line 158,
replace context.TODO() with context.Background() so the background login process
uses the proper background context; update the call creating ctx, completionDone
to use context.Background() instead of context.TODO().
    case <-ctx.Done():
        return nil
    case <-completionCtx.Done():
        cli := completionCtx.Value(loginCliKey).(*upClient)
Unchecked type assertion could panic.
If the context value is missing or has the wrong type, this will panic. While it should always be *upClient in normal flow, defensive programming suggests checking.
Apply this diff:
- cli := completionCtx.Value(loginCliKey).(*upClient)
+ cli, ok := completionCtx.Value(loginCliKey).(*upClient)
+ if !ok {
+ log.Error("unexpected context value type", "name", cliMgr.name)
+ return nil
+ }
return cli🤖 Prompt for AI Agents
In coordinator/internal/controller/proxy/client_manager.go around line 183, the
code does an unchecked type assertion cli :=
completionCtx.Value(loginCliKey).(*upClient) which can panic if the context
value is missing or of the wrong type; change it to retrieve the value, perform
a comma-ok type assertion (v, ok :=
completionCtx.Value(loginCliKey).(*upClient)), check ok and v != nil, and handle
the error path by returning an appropriate error (or logging and aborting the
operation) instead of allowing a panic so callers get a clear failure instead of
a runtime crash.
Actionable comments posted: 4
♻️ Duplicate comments (6)
coordinator/internal/controller/proxy/auth.go (1)
107-149: Strengthen JWT claim decoding and validation inIdentityHandler
IdentityHandlerstill silently ignores type assertion failures and only checks thatPublicKeyis non‑empty before accepting the token and storingloginParamin the context. This can produce partially initializedLoginParameterinstances if claims are missing/malformed, making downstream behavior hard to reason about.Concretely:
- All
.(string)/.(float64)assertions ignore theokflag.- There is no validation that required fields like
Message.ProverNameare present.- Numeric claims (provider type, prover types) are cast to uint8 enums with no range checking.
Consider:
- Logging when a claim has an unexpected type or is missing but required.
- Requiring at least
PublicKeyandMessage.ProverNameto be non‑empty before callingc.Set(...)and returning the public key.- Adding basic range checks around numeric casts.
This will make authentication failures more visible and reduce the risk of subtle bugs due to half‑filled identity structs.
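A compact sketch of the comma-ok handling described above; the helper is hypothetical and only uses the claim keys already quoted in this thread.

```go
// extractIdentity fails loudly instead of silently building a half-filled
// LoginParameter from malformed JWT claims.
func extractIdentity(claims map[string]interface{}) (publicKey, proverName string, err error) {
	pk, ok := claims[types.PublicKey].(string)
	if !ok || pk == "" {
		return "", "", fmt.Errorf("missing or non-string public_key claim")
	}
	name, ok := claims[types.ProverName].(string)
	if !ok || name == "" {
		return "", "", fmt.Errorf("missing or non-string prover_name claim")
	}
	if raw, present := claims[types.ProverProviderTypeKey]; present {
		num, ok := raw.(float64)
		if !ok || num < 0 || num > 255 {
			return "", "", fmt.Errorf("prover_provider_type claim out of range: %v", raw)
		}
	}
	return pk, name, nil
}
```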
coordinator/internal/controller/proxy/get_task.go (1)
218-228: Upstream failures masked as empty-task response.The loop at lines 219-224 doesn't capture errors from failed upstream calls. If every upstream returns an error, the method responds with
ErrCoordinatorEmptyProofData(line 228), hiding the actual failure from the prover and masking potential outages.Apply this diff to surface the last upstream error:
+ var lastErr error + var lastCode int // Iterate over the shuffled keys for _, n := range keys { if err, code := getTask(ptc.clients[n]); err == nil && code == 0 { // get task done return + } else if err != nil { + lastErr = err + lastCode = code } } + if lastErr != nil { + types.RenderFailure(ctx, lastCode, lastErr) + return + } + log.Debug("get no task from upstream", "cli", proverName) // if all get task failed, throw empty proof resp types.RenderFailure(ctx, types.ErrCoordinatorEmptyProofData, fmt.Errorf("get empty prover task"))coordinator/internal/controller/proxy/client_manager.go (4)
43-59: Non-standard key derivation pattern.The iterative append-and-hash approach is not a recognized cryptographic KDF. While deterministic, its security properties are unclear and it's not well-vetted. Use a standard KDF like HKDF, or if the secret is already suitable entropy, a single Keccak256 hash should suffice.
Apply this diff to use a single hash:
-func buildPrivateKey(inputBytes []byte) (*ecdsa.PrivateKey, error) { - // Try appending bytes from 0x0 to 0x20 until we get a valid private key - for appendByte := byte(0x0); appendByte <= 0x20; appendByte++ { - // Append the byte to input - extendedBytes := append(inputBytes, appendByte) - - // Calculate 256-bit hash - hash := crypto.Keccak256(extendedBytes) - - // Try to create private key from hash - if k, err := crypto.ToECDSA(hash); err == nil { - return k, nil - } - } - - return nil, fmt.Errorf("failed to generate valid private key from input bytes") +func buildPrivateKey(inputBytes []byte) (*ecdsa.PrivateKey, error) { + // Use a single hash of the input as the key material + hash := crypto.Keccak256(inputBytes) + return crypto.ToECDSA(hash) }
95-125: Infinite retry loop ignores configured RetryCount.The loop retries indefinitely, ignoring
cfg.RetryCountfrom theUpStreamconfig. This can lead to excessive retry attempts when the upstream is persistently unavailable, wasting resources and delaying failure detection.Apply this diff to enforce retry limits:
+ retryCount := uint(0) for { + if cliMgr.cfg.RetryCount > 0 && retryCount >= cliMgr.cfg.RetryCount { + log.Error("login retry limit exceeded", "name", cliMgr.name, "retries", retryCount) + return + } + retryCount++ + log.Info("proxy attempting login to upstream coordinator", "name", cliMgr.name) loginResp, err := loginCli.Login(ctx, cliMgr.genLoginParam)
158-158: Replace context.TODO() with context.Background().
context.TODO()signals unclear context selection. Since this is for a background login process,context.Background()is the appropriate choice.Apply this diff:
- ctx, completionDone := context.WithCancel(context.TODO()) + ctx, completionDone := context.WithCancel(context.Background())
191-191: Unchecked type assertion can panic.If the context value is missing or has an unexpected type, this assertion will panic. While it should always be
*upClientin normal flow, defensive programming suggests using a checked assertion.Apply this diff:
- cli := completionCtx.Value(loginCliKey).(*upClient) + cli, ok := completionCtx.Value(loginCliKey).(*upClient) + if !ok || cli == nil { + log.Error("unexpected context value type for login client", "name", cliMgr.name) + return nil + } return cli
🧹 Nitpick comments (5)
coordinator/internal/controller/proxy/client.go (1)
69-123: Login should honor the passed context for cancellation/timeouts
Logincurrently builds requests withhttp.NewRequestand never attaches thectxargument, so upstream calls cannot be cancelled by the caller’s context (only by the http.Client timeout). For consistency withProxyLogin,GetTask, andSubmitProof, you should propagatectx:- req, err := http.NewRequest("GET", url, nil) + req, err := http.NewRequestWithContext(ctx, "GET", url, nil) ... - req, err = http.NewRequest("POST", url, bytes.NewBuffer(jsonData)) + req, err = http.NewRequestWithContext(ctx, "POST", url, bytes.NewBuffer(jsonData))This keeps login attempts bounded by both the client’s timeout and any upstream cancellation from the caller.
coordinator/internal/controller/proxy/submit_proof.go (1)
67-93: Align submit‑proof error handling with operation semanticsThe control flow around upstream submission is correct, but the error reporting is a bit misleading:
- On upstream error you use
types.ErrCoordinatorGetTaskFailureeven though this is in the submit‑proof path.- The non‑zero
resp.ErrCodebranch logs"Upstream has error resp for get task"in a submit handler.Consider introducing/using a submit‑specific errno (if available) and updating the log message so operators can immediately distinguish get‑task vs submit‑proof failures:
-} else if resp.ErrCode != 0 { - log.Error("Upstream has error resp for get task", "code", resp.ErrCode, "msg", resp.ErrMsg, "up", upstream, "cli", proverName, "taskID", realTaskID) +} else if resp.ErrCode != 0 { + log.Error("Upstream has error resp for submit proof", "code", resp.ErrCode, "msg", resp.ErrMsg, "up", upstream, "cli", proverName, "taskID", realTaskID) // simply dispatch the error from upstream to prover - types.RenderFailure(ctx, resp.ErrCode, fmt.Errorf("%s", resp.ErrMsg)) + types.RenderFailure(ctx, resp.ErrCode, fmt.Errorf("%s", resp.ErrMsg))and, if available in your errno set, prefer a submit‑specific code over
ErrCoordinatorGetTaskFailurein theerr != nilbranch.coordinator/test/proxy_test.go (1)
31-41: GlobaltestCompatibileModecan cause surprises with parallel testsThe two entry tests mutate the package‑level
testCompatibileMode:var testCompatibileMode bool func TestProxyClient(t *testing.T) { testCompatibileMode = false ... } func TestProxyClientCompatibleMode(t *testing.T) { testCompatibileMode = true ... }This works as long as these tests never run in parallel, but it becomes fragile if someone later adds
t.Parallel()or introduces other tests that depend ontestCompatibileMode.To make the tests more robust, consider:
- Passing the desired compatible‑mode flag explicitly into helpers (e.g.,
testProxyUpStreamCfg) instead of relying on a global, or- Keeping the global but guarding with comments and avoiding
t.Parallel()in this package.Not a blocker, but worth tightening to avoid future flakiness.
Also applies to: 277-297
coordinator/internal/controller/proxy/get_task.go (2)
31-36: Consider failing when prover name is missing.Defaulting to
"unknown"allows the request to proceed despite missing authentication context. While this maintains availability, it degrades observability and may mask upstream auth issues. Consider returning an error instead, or at minimum using a more distinctive placeholder like"unknown_<publicKey_hash>"to aid debugging.
118-118: Remove stale TODO comment.The constructor implementation appears complete. Remove the TODO comment or clarify what additional initialization is needed.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
common/go.mod(1 hunks)coordinator/Makefile(1 hunks)coordinator/internal/controller/proxy/auth.go(1 hunks)coordinator/internal/controller/proxy/client.go(1 hunks)coordinator/internal/controller/proxy/client_manager.go(1 hunks)coordinator/internal/controller/proxy/get_task.go(1 hunks)coordinator/internal/controller/proxy/prover_session.go(1 hunks)coordinator/internal/controller/proxy/submit_proof.go(1 hunks)coordinator/internal/logic/libzkp/mock_universal_task.go(2 hunks)coordinator/internal/logic/libzkp/universal_task.go(2 hunks)coordinator/internal/orm/orm_test.go(1 hunks)coordinator/test/proxy_test.go(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
- common/go.mod
- coordinator/internal/logic/libzkp/universal_task.go
- coordinator/Makefile
- coordinator/internal/controller/proxy/prover_session.go
🧰 Additional context used
🧬 Code graph analysis (7)
coordinator/internal/controller/proxy/submit_proof.go (6)
coordinator/internal/controller/proxy/prover_session.go (1)
ProverManager(18-24)coordinator/internal/controller/proxy/controller.go (2)
Clients(21-21)SubmitProof(14-14)coordinator/internal/controller/proxy/get_task.go (1)
PriorityUpstreamManager(42-46)coordinator/internal/types/submit_proof.go (1)
SubmitProofParameter(4-13)common/types/response.go (2)
RenderFailure(50-52)RenderSuccess(45-47)common/types/errno.go (2)
ErrCoordinatorParameterInvalidNo(22-22)ErrCoordinatorGetTaskFailure(24-24)
coordinator/internal/controller/proxy/get_task.go (8)
coordinator/internal/types/auth.go (2)
PublicKey(16-16)ProverName(18-18)common/types/response.go (2)
RenderFailure(50-52)RenderSuccess(45-47)common/types/errno.go (3)
ErrCoordinatorParameterInvalidNo(22-22)ErrCoordinatorGetTaskFailure(24-24)ErrCoordinatorEmptyProofData(28-28)coordinator/internal/controller/proxy/persistent.go (1)
NewProverPriorityPersist(87-89)coordinator/internal/controller/proxy/prover_session.go (1)
ProverManager(18-24)coordinator/internal/controller/proxy/controller.go (2)
Clients(21-21)GetTask(12-12)coordinator/internal/types/get_task.go (2)
GetTaskParameter(4-9)GetTaskSchema(12-19)coordinator/internal/controller/proxy/client_manager.go (1)
Client(20-27)
coordinator/internal/controller/proxy/auth.go (9)
coordinator/internal/controller/api/auth.go (3)
AuthController(18-20)NewAuthController(29-33)NewAuthControllerWithLogic(22-26)coordinator/internal/controller/proxy/controller.go (1)
Clients(21-21)coordinator/internal/controller/proxy/prover_session.go (1)
ProverManager(18-24)coordinator/internal/config/proxy_config.go (2)
ProxyConfig(48-52)ProxyManager(13-19)coordinator/internal/logic/verifier/types.go (1)
Verifier(11-17)coordinator/internal/logic/auth/login.go (1)
NewLoginLogicWithSimpleDeduplicator(43-45)coordinator/internal/types/auth.go (7)
LoginParameterWithHardForkName(52-55)LoginParameter(58-62)Message(42-49)PublicKey(16-16)ProverName(18-18)ProverVersion(20-20)ProverProviderTypeKey(22-22)coordinator/internal/types/prover.go (3)
ProverProviderType(59-59)ProverProviderTypeProxy(82-82)ProverType(20-20)coordinator/internal/controller/proxy/client_manager.go (1)
Client(20-27)
coordinator/internal/controller/proxy/client_manager.go (4)
coordinator/internal/controller/proxy/client.go (2)
ProverCli(28-31)ProxyCli(21-26)coordinator/internal/config/proxy_config.go (2)
ProxyClient(32-36)UpStream(39-45)coordinator/internal/types/auth.go (5)
LoginParameter(58-62)PublicKey(16-16)Message(42-49)ProverName(18-18)ProverVersion(20-20)coordinator/internal/types/prover.go (3)
ProverProviderType(59-59)ProverProviderTypeProxy(82-82)ProverType(20-20)
coordinator/test/proxy_test.go (5)
coordinator/internal/config/proxy_config.go (2)
ProxyClient(32-36)UpStream(39-45)coordinator/internal/controller/proxy/client_manager.go (1)
NewClientManager(61-75)common/types/message/message.go (1)
ProofTypeChunk(33-33)common/types/errno.go (1)
ErrCoordinatorEmptyProofData(28-28)common/types/db.go (2)
ProvingStatus(140-140)ProvingTaskVerified(152-152)
coordinator/internal/controller/proxy/client.go (6)
coordinator/internal/types/auth.go (3)
LoginParameter(58-62)PublicKey(16-16)Message(42-49)common/types/response.go (1)
Response(11-15)coordinator/internal/types/get_task.go (1)
GetTaskParameter(4-9)coordinator/internal/types/submit_proof.go (1)
SubmitProofParameter(4-13)coordinator/internal/controller/proxy/client_manager.go (1)
Client(20-27)coordinator/internal/config/proxy_config.go (1)
UpStream(39-45)
coordinator/internal/logic/libzkp/mock_universal_task.go (1)
coordinator/internal/logic/libzkp/universal_task.go (1)
GenerateUniversalTask(24-26)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: tests
🔇 Additional comments (3)
coordinator/internal/logic/libzkp/mock_universal_task.go (2)
8-8: LGTM: Import addition is correct.The
stringsimport is needed forstrings.ToLower(forkName)on line 19.
18-20: LGTM: Public wrapper correctly exposes mock functionality with all dependencies verified.The wrapper properly transforms inputs (
fromMessageTaskTypefor task type mapping,strings.ToLowerfor fork name normalization) before delegating to the internalgenerateUniversalTask. The implementation is consistent with the non-mock version inuniversal_task.go.The
fromMessageTaskTypehelper andTaskTypeconstants are properly defined inmessage_types.go(same package) and are accessible. The internal mock function acceptsexpectedVkanddecryptionKeyparameters but doesn't use them, which is acceptable for a mock implementation that returns hardcoded test data.coordinator/internal/orm/orm_test.go (1)
28-32: Guarded teardown avoids nil DB usageConditionally calling
tearDownEnv(t)only whentestApps != nilpreventsdb.DB()from being called with an uninitialized global, which would panic in tests that never callsetupEnv. Change looks correct.
// Login extended the Login hander in api controller
func (a *AuthController) Login(c *gin.Context) (interface{}, error) {

    loginRes, err := a.apiLogin.Login(c)
    if err != nil {
        return nil, err
    }
    loginParam := loginRes.(types.LoginParameterWithHardForkName)

    if loginParam.LoginParameter.Message.ProverProviderType == types.ProverProviderTypeProxy {
        return nil, fmt.Errorf("proxy do not support recursive login")
    }

    session := a.proverMgr.GetOrCreate(loginParam.PublicKey)
    log.Debug("start handling login", "cli", loginParam.Message.ProverName)

    loginCtx, cf := context.WithTimeout(context.Background(), upstreamConnTimeout)
    var wg sync.WaitGroup
    for _, cli := range a.clients {
        wg.Add(1)
        go func(cli Client) {
            defer wg.Done()
            if err := session.ProxyLogin(loginCtx, cli, &loginParam.LoginParameter); err != nil {
                log.Error("proxy login failed during token cache update",
                    "userKey", loginParam.PublicKey,
                    "upstream", cli.Name(),
                    "error", err)
            }
        }(cli)
    }
    go func(cliName string) {
        wg.Wait()
        cf()
        log.Debug("first login attempt has completed", "cli", cliName)
    }(loginParam.Message.ProverName)

    return loginParam.LoginParameter, nil
}
Guard type assertion on login result to avoid panics
Login assumes a.apiLogin.Login(c) always returns types.LoginParameterWithHardForkName and does a hard type assertion:
loginRes, err := a.apiLogin.Login(c)
...
loginParam := loginRes.(types.LoginParameterWithHardForkName)If the underlying API auth controller ever changes its return type, this will panic and take down the login endpoint.
Safer pattern:
- loginRes, err := a.apiLogin.Login(c)
- if err != nil {
- return nil, err
- }
- loginParam := loginRes.(types.LoginParameterWithHardForkName)
+ loginRes, err := a.apiLogin.Login(c)
+ if err != nil {
+ return nil, err
+ }
+ loginParam, ok := loginRes.(types.LoginParameterWithHardForkName)
+ if !ok {
+ return nil, fmt.Errorf("unexpected login result type: %T", loginRes)
+ }This keeps behavior explicit and avoids surprising panics if the API layer evolves.
🤖 Prompt for AI Agents
In coordinator/internal/controller/proxy/auth.go around lines 50 to 87, the code
performs an unchecked type assertion on the result of a.apiLogin.Login(c) which
can panic if the returned value is not types.LoginParameterWithHardForkName;
change this to a guarded type assertion by using the two-value form (v, ok :=
loginRes.(types.LoginParameterWithHardForkName)), handle the false case by
returning a clear typed error (or wrapping the original result) instead of
proceeding, and ensure subsequent code uses the safely asserted value so the
login endpoint cannot crash due to an unexpected return type.
func handleHttpResp(resp *http.Response) (*ctypes.Response, error) {
    if resp.StatusCode == http.StatusOK || resp.StatusCode == http.StatusUnauthorized {
        defer resp.Body.Close()
        var respWithData ctypes.Response
        // Note: Body is consumed after decoding, caller should not read it again
        if err := json.NewDecoder(resp.Body).Decode(&respWithData); err == nil {
            return &respWithData, nil
        } else {
            return nil, fmt.Errorf("login parsing expected response failed: %v", err)
        }

    }
    return nil, fmt.Errorf("login request failed with status: %d", resp.StatusCode)
}
Always close HTTP response bodies, even on non‑200/401 statuses
handleHttpResp only defers resp.Body.Close() when the status is 200 or 401. For any other status, the function returns without closing the body, which can leak connections and eventually exhaust the client's connection pool.
You can fix this by deferring Close() unconditionally and simplifying the control flow:
-func handleHttpResp(resp *http.Response) (*ctypes.Response, error) {
- if resp.StatusCode == http.StatusOK || resp.StatusCode == http.StatusUnauthorized {
- defer resp.Body.Close()
- var respWithData ctypes.Response
- // Note: Body is consumed after decoding, caller should not read it again
- if err := json.NewDecoder(resp.Body).Decode(&respWithData); err == nil {
- return &respWithData, nil
- } else {
- return nil, fmt.Errorf("login parsing expected response failed: %v", err)
- }
-
- }
- return nil, fmt.Errorf("login request failed with status: %d", resp.StatusCode)
-}
+func handleHttpResp(resp *http.Response) (*ctypes.Response, error) {
+ defer resp.Body.Close()
+
+ if resp.StatusCode == http.StatusOK || resp.StatusCode == http.StatusUnauthorized {
+ var respWithData ctypes.Response
+ // Note: Body is consumed after decoding, caller should not read it again
+ if err := json.NewDecoder(resp.Body).Decode(&respWithData); err != nil {
+ return nil, fmt.Errorf("parsing expected response failed: %w", err)
+ }
+ return &respWithData, nil
+ }
+
+ return nil, fmt.Errorf("request failed with status: %d", resp.StatusCode)
+}After this, the file‑level //nolint:bodyclose is no longer needed.
🤖 Prompt for AI Agents
In coordinator/internal/controller/proxy/client.go around lines 125 to 138, the
function only defers resp.Body.Close() for 200/401 paths which can leak
connections on other statuses; always defer resp.Body.Close() immediately after
ensuring resp is non-nil, then simplify the logic to decode on success statuses
and return errors otherwise; remove the file-level //nolint:bodyclose comment
once Close is guaranteed unconditionally.
func (c *upClient) proxyLoginCompatibleMode(ctx context.Context, param *types.LoginParameter) (*ctypes.Response, error) {
    mimePrivK, err := buildPrivateKey([]byte(param.PublicKey))
    if err != nil {
        return nil, err
    }
    mimePkHex := common.Bytes2Hex(crypto.CompressPubkey(&mimePrivK.PublicKey))

    genLoginParam := func(challenge string) (*types.LoginParameter, error) {

        // Create login parameter with proxy settings
        loginParam := &types.LoginParameter{
            Message:   param.Message,
            PublicKey: mimePkHex,
        }
        loginParam.Message.Challenge = challenge

        // Sign the message with the private key
        if err := loginParam.SignWithKey(mimePrivK); err != nil {
            return nil, fmt.Errorf("failed to sign login parameter: %w", err)
        }

        return loginParam, nil
    }

    return c.Login(ctx, genLoginParam)
}
🧩 Analysis chain
Reusing PublicKey as private‑key seed in compatibility mode looks incorrect
In proxyLoginCompatibleMode, you derive mimePrivK via:
mimePrivK, err := buildPrivateKey([]byte(param.PublicKey))
mimePkHex := common.Bytes2Hex(crypto.CompressPubkey(&mimePrivK.PublicKey))
but elsewhere (e.g., NewClientManager) the private key is derived from ProxyClient.Secret, and PublicKey is conceptually a public key, not a secret. Using param.PublicKey as the seed for buildPrivateKey is very likely to produce a different keypair than the one used to authenticate the proxy client, so the signature you generate here may not be verifiable upstream in compatible mode.
It would be safer for compatibility mode to reuse the proxy’s configured private key (from the client manager) rather than deriving a new one from param.PublicKey. That likely means threading the proxy’s key (or a signer) into upClient instead of recomputing it from the login parameter.
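One hedged way to express "thread the proxy's key into upClient": inject a signing callback from the client manager, so compatible mode reuses the key derived from ProxyClient.Secret. The type, field, and method names below are invented for the sketch and do not exist in the PR.

```go
// signFunc is provided by the ClientManager and closes over the proxy's ECDSA key.
type signFunc func(param *types.LoginParameter) error

type upClientSketch struct {
	baseURL string
	sign    signFunc
}

// compatibleLoginParam builds the upstream login parameter and signs it with the
// injected proxy key instead of deriving a throwaway key from param.PublicKey.
func (c *upClientSketch) compatibleLoginParam(base types.Message, challenge, proxyPubKeyHex string) (*types.LoginParameter, error) {
	p := &types.LoginParameter{Message: base, PublicKey: proxyPubKeyHex}
	p.Message.Challenge = challenge
	if err := c.sign(p); err != nil {
		return nil, err
	}
	return p, nil
}
```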
I'll verify the review comment by examining the codebase to understand the cryptographic key handling and whether the concern is valid.
---
🏁 Script executed:
```shell
#!/bin/bash
# Find buildPrivateKey function definition
rg -n "func buildPrivateKey" coordinator/
Length of output: 127
🏁 Script executed:
#!/bin/bash
# Find NewClientManager and how it handles Secret
rg -n "NewClientManager\|ProxyClient\.Secret" coordinator/ -A 5 -B 2Length of output: 0
🏁 Script executed:
#!/bin/bash
# Search for how upClient is created and what context it has
rg -n "type upClient\|func.*upClient\|NewUpClient" coordinator/ -A 3Length of output: 0
🏁 Script executed:
#!/bin/bash
# Look at LoginParameter struct definition
rg -n "type LoginParameter\|PublicKey.*string" coordinator/ -B 2 -A 8Length of output: 0
🏁 Script executed:
#!/bin/bash
# Get the buildPrivateKey implementation
sed -n '43,80p' coordinator/internal/controller/proxy/client_manager.goLength of output: 1036
🏁 Script executed:
#!/bin/bash
# Look for Client struct definition and context
rg -n "type.*Client struct" coordinator/internal/controller/proxy/ -A 10Length of output: 838
🏁 Script executed:
#!/bin/bash
# Find the proxyLoginCompatibleMode method to understand its receiver
rg -n "proxyLoginCompatibleMode" coordinator/ -B 5 -A 30Length of output: 5603
🏁 Script executed:
#!/bin/bash
# Find where ProxyClient or similar config is used
rg -n "ProxyClient\|Secret" coordinator/internal/controller/proxy/ -B 2 -A 2Length of output: 0
---
🏁 Script executed:
```shell
#!/bin/bash
# Find LoginParameter type and PublicKey field
fd -e go . coordinator/ -x grep -l "type LoginParameter" {} \; 2>/dev/null | head -5
Length of output: 34
🏁 Script executed:
#!/bin/bash
# Search for types.LoginParameter or similar to find the type definition
rg -n "type LoginParameter" coordinator/ -A 15Length of output: 1545
🏁 Script executed:
#!/bin/bash
# Find how ClientManager creates or uses upClient
rg -n "ClientManager.*struct\|func.*ClientManager" coordinator/internal/controller/proxy/client_manager.go -A 10Length of output: 0
🏁 Script executed:
#!/bin/bash
# Look for where upClient is used and how it gets the param with PublicKey
rg -n "newUpClient\|ProxyLogin" coordinator/internal/controller/proxy/ -B 5 -A 5Length of output: 0
---
🏁 Script executed:
```shell
#!/bin/bash
# Find ClientManager struct definition and its methods
rg -n "type ClientManager struct" coordinator/internal/controller/proxy/ -A 20
Length of output: 1874
🏁 Script executed:
#!/bin/bash
# Search for GetUpClient or how ClientManager provides upClient
rg -n "GetUpClient\|\.upClient\|func.*ClientManager\)" coordinator/internal/controller/proxy/client_manager.go -A 8Length of output: 0
🏁 Script executed:
#!/bin/bash
# Find who calls proxyLoginCompatibleMode and creates the LoginParameter
rg -n "proxyLoginCompatibleMode\|ProxyLogin" coordinator/internal/controller/proxy/client_manager.go -B 10 -A 5Length of output: 0
🏁 Script executed:
#!/bin/bash
# Look at the entire client_manager.go to understand the relationship
wc -l coordinator/internal/controller/proxy/client_manager.go && sed -n '1,120p' coordinator/internal/controller/proxy/client_manager.goLength of output: 3430
---
🏁 Script executed:
```shell
#!/bin/bash
# Find genLoginParam method
rg -n "genLoginParam" coordinator/internal/controller/proxy/ -B 3 -A 10
Length of output: 5006
🏁 Script executed:
#!/bin/bash
# Find ClientAsProxy method to understand how upClient is used
rg -n "ClientAsProxy\|func.*ClientManager.*Client" coordinator/internal/controller/proxy/client_manager.go -A 15Length of output: 0
🏁 Script executed:
#!/bin/bash
# Look at rest of client_manager.go to see complete flow
sed -n '120,220p' coordinator/internal/controller/proxy/client_manager.goLength of output: 2686
---
🏁 Script executed:
```shell
#!/bin/bash
# Search for where ProxyLogin is called in the codebase
rg -n "ProxyLogin\(" coordinator/ -B 3 -A 2
Length of output: 3267
🏁 Script executed:
#!/bin/bash
# Look for ProxyCli interface to understand what methods are called
rg -n "type ProxyCli\|interface ProxyCli" coordinator/ -A 15Length of output: 0
func (p *PriorityUpstreamManager) Get(key string) (string, bool) {

    p.RLock()
    value, exists := p.data[key]
    p.RUnlock()

    if !exists {
        if v, err := p.proverPriorityPersist.Get(key); err != nil {
            log.Error("persistent priority record read failure", "error", err, "key", key)
        } else if v != "" {
            log.Debug("restore record from persistent layer", "key", key, "value", v)
            return v, true
        }
    }

    return value, exists
}

// Set sets the priority upstream for a given key
func (p *PriorityUpstreamManager) Set(key, value string) {
    defer func() {
        if err := p.proverPriorityPersist.Update(key, value); err != nil {
            log.Error("update priority record failure", "error", err, "key", key, "value", value)
        }
    }()
    p.Lock()
    defer p.Unlock()
    p.data[key] = value
}

// Delete removes the priority upstream for a given key
func (p *PriorityUpstreamManager) Delete(key string) {
    defer func() {
        if err := p.proverPriorityPersist.Del(key); err != nil {
            log.Error("delete priority record failure", "error", err, "key", key)
        }
    }()
    p.Lock()
    defer p.Unlock()
    delete(p.data, key)
}
Nil pointer dereference in PriorityUpstreamManager methods.
All three methods (Get, Set, Delete) unconditionally call methods on p.proverPriorityPersist without nil checks. When PriorityUpstreamManager is created via NewPriorityUpstreamManager() (lines 49-53), the proverPriorityPersist field remains nil, causing panics at lines 71, 85, and 97.
Apply this diff to add nil guards:
```diff
 func (p *PriorityUpstreamManager) Get(key string) (string, bool) {
 	p.RLock()
 	value, exists := p.data[key]
 	p.RUnlock()

 	if !exists {
+		if p.proverPriorityPersist != nil {
 		if v, err := p.proverPriorityPersist.Get(key); err != nil {
 			log.Error("persistent priority record read failure", "error", err, "key", key)
 		} else if v != "" {
 			log.Debug("restore record from persistent layer", "key", key, "value", v)
 			return v, true
 		}
+		}
 	}

 	return value, exists
 }

 func (p *PriorityUpstreamManager) Set(key, value string) {
+	if p.proverPriorityPersist != nil {
 	defer func() {
 		if err := p.proverPriorityPersist.Update(key, value); err != nil {
 			log.Error("update priority record failure", "error", err, "key", key, "value", value)
 		}
 	}()
+	}
 	p.Lock()
 	defer p.Unlock()
 	p.data[key] = value
 }

 func (p *PriorityUpstreamManager) Delete(key string) {
+	if p.proverPriorityPersist != nil {
 	defer func() {
 		if err := p.proverPriorityPersist.Del(key); err != nil {
 			log.Error("delete priority record failure", "error", err, "key", key)
 		}
 	}()
+	}
 	p.Lock()
 	defer p.Unlock()
 	delete(p.data, key)
 }
```
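If nil guards feel noisy, an alternative is to have the constructor inject a no-op persistence layer so the field is never nil. The sketch below is self-contained and hypothetical: the persistStore interface, field layout, and constructor shape are assumptions for illustration, not the actual types in this package.

```go
package main

import (
	"fmt"
	"sync"
)

// persistStore is an assumed interface matching the calls the manager makes
// (Get/Update/Del); the real interface in the proxy package may differ.
type persistStore interface {
	Get(key string) (string, error)
	Update(key, value string) error
	Del(key string) error
}

// noopPersist satisfies persistStore but stores nothing, so the manager's
// methods never need nil checks on the persistence field.
type noopPersist struct{}

func (noopPersist) Get(key string) (string, error) { return "", nil }
func (noopPersist) Update(key, value string) error { return nil }
func (noopPersist) Del(key string) error           { return nil }

// PriorityUpstreamManager is a reduced sketch of the reviewed type.
type PriorityUpstreamManager struct {
	sync.RWMutex
	data                  map[string]string
	proverPriorityPersist persistStore
}

// NewPriorityUpstreamManager falls back to the no-op store when no persistent
// backend is supplied, keeping Get/Set/Delete free of nil guards.
func NewPriorityUpstreamManager() *PriorityUpstreamManager {
	return &PriorityUpstreamManager{
		data:                  make(map[string]string),
		proverPriorityPersist: noopPersist{},
	}
}

func main() {
	m := NewPriorityUpstreamManager()
	m.Lock()
	m.data["prover-1"] = "upstream-a"
	m.Unlock()
	fmt.Println(m.data["prover-1"])
}
```

With a fallback like this, the deferred persistence calls in Set and Delete become harmless no-ops instead of panics when no store is configured.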
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
coordinator/internal/logic/libzkp/lib.go (1)
35-40: Add error handling to InitVerifier. The function calls C.init_verifier but provides no way to detect initialization failures. If the C function fails (e.g., due to invalid configuration or missing resources), the error will be silent, potentially causing difficult-to-debug issues during later verification calls. Consider modifying the C binding to return an error status, or at minimum, document that callers should verify initialization success through subsequent operations.
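If the C binding were extended to report failures, the Go wrapper could surface them as in the sketch below. This is hypothetical: the current binding returns nothing, so initVerifierStatus here is a stand-in stub, not the real cgo call.

```go
package main

import "fmt"

// initVerifierStatus is a stub standing in for a hypothetical C binding that
// reports failure via a non-zero status code instead of returning void.
func initVerifierStatus(configPath string) int {
	if configPath == "" {
		return 1 // simulate a failure such as missing configuration
	}
	return 0
}

// InitVerifier surfaces initialization failures to the caller instead of
// failing silently.
func InitVerifier(configPath string) error {
	if status := initVerifierStatus(configPath); status != 0 {
		return fmt.Errorf("init_verifier failed with status %d", status)
	}
	return nil
}

func main() {
	if err := InitVerifier(""); err != nil {
		fmt.Println("caller can now react:", err)
	}
}
```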
🧹 Nitpick comments (2)
coordinator/internal/logic/libzkp/lib.go (2)
76-98: Consider consistent error handling pattern. The function returns an empty string when resultPtr is nil (line 90), while UniversalTaskCompatibilityFix (lines 125-126) returns an explicit error for the same condition. This inconsistency could lead to ambiguity—an empty string might be a valid result or an error. Consider returning (string, error) for consistency with UniversalTaskCompatibilityFix, or document that an empty string indicates an error.
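A minimal sketch of the suggested (string, error) shape follows; callIntoLibzkp is a stub standing in for the real cgo call, not the actual function in lib.go.

```go
package main

import (
	"errors"
	"fmt"
)

// callIntoLibzkp stands in for the real cgo call; it returns nil to simulate
// the resultPtr == nil case discussed above.
func callIntoLibzkp(input string) *string {
	if input == "" {
		return nil
	}
	out := "task-for-" + input
	return &out
}

// generateTask reports a nil result as an explicit error, so callers cannot
// confuse "no result" with a legitimately empty string.
func generateTask(input string) (string, error) {
	resultPtr := callIntoLibzkp(input)
	if resultPtr == nil {
		return "", errors.New("libzkp returned a nil result")
	}
	return *resultPtr, nil
}

func main() {
	if _, err := generateTask(""); err != nil {
		fmt.Println("error surfaced to caller:", err)
	}
}
```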
1-134: Wrapper implementations verified—consider adding thread safety documentation. Verification confirms that GenerateUniversalTask is properly exposed in both wrapper files (universal_task.go:24 and mock_universal_task.go:18), with matching signatures and correct parameter handling. The TaskType constants (TaskTypeChunk, TaskTypeBatch, TaskTypeBundle) are defined in message_types.go, and the fromMessageTaskType helper function exists and is correctly used in both implementations. Build tag separation is working as intended.

The refactoring properly exposes all moved functionality. As an optional enhancement, consider documenting the thread safety characteristics of these CGO functions—particularly for concurrent usage patterns in the proxy architecture.
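If documentation alone is not enough, a small serialization guard makes the contract explicit. The sketch below is illustrative only: the function name is a placeholder, and the mutex is needed only if libzkp is not reentrant, which is exactly the assumption worth spelling out in the doc comment.

```go
package main

import (
	"fmt"
	"sync"
)

// cgoCallMu serializes calls into the CGO layer. Remove it if libzkp is
// confirmed to be safe for concurrent calls.
var cgoCallMu sync.Mutex

// VerifyProofSketch is safe for concurrent use only because calls are
// serialized by cgoCallMu; the body is a placeholder for the real cgo call.
func VerifyProofSketch(proof []byte) bool {
	cgoCallMu.Lock()
	defer cgoCallMu.Unlock()
	return len(proof) > 0
}

func main() {
	fmt.Println(VerifyProofSketch([]byte{0x01}))
}
```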
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- coordinator/Makefile (1 hunks)
- coordinator/internal/logic/libzkp/lib.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- coordinator/Makefile
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: check
- GitHub Check: check
- GitHub Check: tests
- GitHub Check: tests
- GitHub Check: tests
🔇 Additional comments (5)
coordinator/internal/logic/libzkp/lib.go (5)
24-32: LGTM! The helper functions provide clean abstractions for C string conversion and memory management. The use of unsafe.Pointer is appropriate for CGO interop.
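For readers unfamiliar with the pattern being praised, such helpers typically wrap the standard C.CString / C.free pairing shown below; the helper names here are placeholders, since the actual names in lib.go are not reproduced in this comment.

```go
package main

/*
#include <stdlib.h>
*/
import "C"

import (
	"fmt"
	"unsafe"
)

// goStringToC copies a Go string into C-owned memory; the caller must free it.
func goStringToC(s string) *C.char {
	return C.CString(s)
}

// freeCString releases memory allocated by C.CString.
func freeCString(p *C.char) {
	C.free(unsafe.Pointer(p))
}

func main() {
	cs := goStringToC("hello")
	defer freeCString(cs)
	fmt.Println("allocated and scheduled free for C string")
}
```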
43-73: LGTM! The three verification functions follow a consistent, correct pattern. The use of strings.ToLower on forkName ensures case-insensitive matching, and memory management with defer is proper.
101-117: LGTM! The workaround of checking file existence via os.Stat (lines 112-114) is reasonable given that the C function doesn't return an error code. The inline comment (line 111) clearly documents this limitation.
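The pattern being endorsed is roughly the following; the path and function name are examples rather than the ones used in lib.go.

```go
package main

import (
	"fmt"
	"os"
)

// outputWasWritten infers success of a void C call by checking that the
// expected artifact exists on disk, since no error code is available.
func outputWasWritten(path string) error {
	if _, err := os.Stat(path); err != nil {
		return fmt.Errorf("expected output %q not found: %w", path, err)
	}
	return nil
}

func main() {
	if err := outputWasWritten("/tmp/libzkp-output.json"); err != nil {
		fmt.Println(err)
	}
}
```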
120-134: LGTM! Proper error handling with explicit error return, correct memory management, and clear error message. This is a good pattern for CGO function wrappers.
1-2: Build constraint syntax is compatible with project Go version. The //go:build directive is supported in Go 1.22 (specified in coordinator/go.mod), which exceeds the minimum required Go 1.17. No action needed.
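For reference, a file guarded this way needs only the newer directive at Go 1.22 (no legacy // +build line); the tag name below is illustrative, not the one used in this package.

```go
//go:build !mock_verifier

package libzkp
```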
This PR introduces a "coordinator_proxy" app to support the prover cluster mode.
Design doc: https://www.notion.so/scrollzkp/Almost-stateless-prover-proxy-2537792d22af80b08db3f8397e0824bd?d=29c7792d22af80ee98df001caf30b7e1#2537792d22af806880efc8cea92d0e19
Summary by CodeRabbit
New Features
Chores